Subjects -> MATHEMATICS (Total: 1013 journals)
    - APPLIED MATHEMATICS (92 journals)
    - GEOMETRY AND TOPOLOGY (23 journals)
    - MATHEMATICS (714 journals)
    - MATHEMATICS (GENERAL) (45 journals)
    - NUMERICAL ANALYSIS (26 journals)
    - PROBABILITIES AND MATH STATISTICS (113 journals)

MATHEMATICS (714 journals)

Showing 1 - 200 of 538 Journals sorted alphabetically
Abakós     Open Access   (Followers: 4)
Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg     Hybrid Journal   (Followers: 2)
Accounting Perspectives     Full-text available via subscription   (Followers: 4)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 13)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 5)
ACM Transactions on Mathematical Software (TOMS)     Hybrid Journal   (Followers: 5)
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 44)
Acta Applicandae Mathematicae     Hybrid Journal   (Followers: 2)
Acta Mathematica Hungarica     Hybrid Journal   (Followers: 4)
Acta Mathematica Sinica, English Series     Hybrid Journal   (Followers: 5)
Acta Mathematica Vietnamica     Hybrid Journal  
Acta Mathematicae Applicatae Sinica, English Series     Hybrid Journal  
Advanced Science Letters     Full-text available via subscription   (Followers: 9)
Advances in Applied Clifford Algebras     Hybrid Journal   (Followers: 6)
Advances in Catalysis     Full-text available via subscription   (Followers: 7)
Advances in Complex Systems     Hybrid Journal   (Followers: 10)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 17)
Advances in Decision Sciences     Open Access   (Followers: 4)
Advances in Difference Equations     Open Access   (Followers: 4)
Advances in Fixed Point Theory     Open Access  
Advances in Geosciences (ADGEO)     Open Access   (Followers: 21)
Advances in Linear Algebra & Matrix Theory     Open Access   (Followers: 7)
Advances in Materials Science     Open Access   (Followers: 23)
Advances in Mathematical Physics     Open Access   (Followers: 6)
Advances in Mathematics     Full-text available via subscription   (Followers: 18)
Advances in Numerical Analysis     Open Access   (Followers: 4)
Advances in Operations Research     Open Access   (Followers: 13)
Advances in Operator Theory     Hybrid Journal  
Advances in Pure Mathematics     Open Access   (Followers: 10)
Advances in Science and Research (ASR)     Open Access   (Followers: 8)
Aequationes Mathematicae     Hybrid Journal   (Followers: 2)
African Journal of Educational Studies in Mathematics and Sciences     Full-text available via subscription   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 5)
Afrika Matematika     Hybrid Journal   (Followers: 2)
Air, Soil & Water Research     Open Access   (Followers: 7)
AKSIOMATIK : Jurnal Penelitian Pendidikan dan Pembelajaran Matematika     Open Access  
Al-Jabar : Jurnal Pendidikan Matematika     Open Access  
Al-Qadisiyah Journal for Computer Science and Mathematics     Open Access   (Followers: 3)
AL-Rafidain Journal of Computer Sciences and Mathematics     Open Access   (Followers: 4)
Algebra and Logic     Hybrid Journal   (Followers: 9)
Algebra Colloquium     Hybrid Journal   (Followers: 3)
Algebra Universalis     Hybrid Journal   (Followers: 3)
Algorithmic Operations Research     Open Access   (Followers: 7)
Algorithms     Open Access   (Followers: 15)
Algorithms Research     Open Access   (Followers: 2)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 4)
American Journal of Mathematical Analysis     Open Access   (Followers: 1)
American Journal of Mathematical and Management Sciences     Hybrid Journal  
American Journal of Mathematics     Full-text available via subscription   (Followers: 9)
American Journal of Operations Research     Open Access   (Followers: 6)
American Mathematical Monthly     Full-text available via subscription   (Followers: 4)
An International Journal of Optimization and Control: Theories & Applications     Open Access   (Followers: 13)
Analele Universitatii Ovidius Constanta - Seria Matematica     Open Access  
Analysis and Applications     Hybrid Journal   (Followers: 2)
Analysis and Mathematical Physics     Hybrid Journal   (Followers: 7)
Anargya : Jurnal Ilmiah Pendidikan Matematika     Open Access  
Annales Mathematicae Silesianae     Open Access  
Annales mathématiques du Québec     Hybrid Journal   (Followers: 3)
Annales Universitatis Mariae Curie-Sklodowska, sectio A – Mathematica     Open Access   (Followers: 1)
Annales Universitatis Paedagogicae Cracoviensis. Studia Mathematica     Open Access  
Annali di Matematica Pura ed Applicata     Hybrid Journal   (Followers: 1)
Annals of Combinatorics     Hybrid Journal   (Followers: 3)
Annals of Data Science     Hybrid Journal   (Followers: 18)
Annals of Functional Analysis     Hybrid Journal   (Followers: 2)
Annals of Mathematics     Full-text available via subscription   (Followers: 6)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 13)
Annals of PDE     Hybrid Journal   (Followers: 1)
Annals of Pure and Applied Logic     Open Access   (Followers: 5)
Annals of the Alexandru Ioan Cuza University - Mathematics     Open Access   (Followers: 1)
Annals of the Institute of Statistical Mathematics     Hybrid Journal   (Followers: 1)
Annals of West University of Timisoara - Mathematics     Open Access   (Followers: 1)
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access   (Followers: 2)
Annuaire du Collège de France     Open Access   (Followers: 6)
ANZIAM Journal     Open Access   (Followers: 1)
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 3)
Applications of Mathematics     Hybrid Journal   (Followers: 3)
Applied Categorical Structures     Hybrid Journal   (Followers: 5)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 17)
Applied Mathematics     Open Access   (Followers: 6)
Applied Mathematics & Optimization     Hybrid Journal   (Followers: 7)
Applied Mathematics - A Journal of Chinese Universities     Hybrid Journal   (Followers: 1)
Applied Mathematics and Nonlinear Sciences     Open Access   (Followers: 2)
Applied Mathematics Letters     Full-text available via subscription   (Followers: 4)
Applied Mathematics Research eXpress     Hybrid Journal   (Followers: 1)
Applied Network Science     Open Access   (Followers: 3)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 4)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 5)
Arab Journal of Mathematical Sciences     Open Access   (Followers: 3)
Arabian Journal of Mathematics     Open Access   (Followers: 1)
Archive for Mathematical Logic     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 4)
Archive of Numerical Software     Open Access  
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 5)
Armenian Journal of Mathematics     Open Access  
Arnold Mathematical Journal     Hybrid Journal   (Followers: 1)
Artificial Satellites     Open Access   (Followers: 22)
Asia-Pacific Journal of Operational Research     Hybrid Journal   (Followers: 3)
Asian Journal of Algebra     Open Access   (Followers: 1)
Asian Research Journal of Mathematics     Open Access  
Asian-European Journal of Mathematics     Hybrid Journal   (Followers: 2)
Australian Mathematics Teacher, The     Full-text available via subscription   (Followers: 7)
Australian Primary Mathematics Classroom     Full-text available via subscription   (Followers: 5)
Australian Senior Mathematics Journal     Full-text available via subscription   (Followers: 1)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Axioms     Open Access   (Followers: 1)
Baltic International Yearbook of Cognition, Logic and Communication     Open Access   (Followers: 2)
Banach Journal of Mathematical Analysis     Hybrid Journal  
Basin Research     Hybrid Journal   (Followers: 6)
BIBECHANA     Open Access  
Biomath     Open Access  
BIT Numerical Mathematics     Hybrid Journal  
Boletim Cearense de Educação e História da Matemática     Open Access  
Boletim de Educação Matemática     Open Access  
Boletín de la Sociedad Matemática Mexicana     Hybrid Journal  
Bollettino dell'Unione Matematica Italiana     Full-text available via subscription  
British Journal for the History of Mathematics     Hybrid Journal   (Followers: 2)
British Journal of Mathematical and Statistical Psychology     Full-text available via subscription   (Followers: 18)
British Journal of Mathematics & Computer Science     Full-text available via subscription   (Followers: 1)
Buletinul Academiei de Stiinte a Republicii Moldova. Matematica     Open Access   (Followers: 3)
Bulletin des Sciences Mathématiques     Full-text available via subscription   (Followers: 3)
Bulletin of Dnipropetrovsk University. Series : Communications in Mathematical Modeling and Differential Equations Theory     Open Access   (Followers: 3)
Bulletin of Mathematical Sciences     Open Access   (Followers: 1)
Bulletin of Symbolic Logic     Full-text available via subscription   (Followers: 4)
Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics     Open Access  
Bulletin of the Australian Mathematical Society     Full-text available via subscription   (Followers: 2)
Bulletin of the Brazilian Mathematical Society, New Series     Hybrid Journal  
Bulletin of the Iranian Mathematical Society     Hybrid Journal  
Bulletin of the London Mathematical Society     Hybrid Journal   (Followers: 3)
Bulletin of the Malaysian Mathematical Sciences Society     Hybrid Journal  
Cadernos do IME : Série Matemática     Open Access  
Calculus of Variations and Partial Differential Equations     Hybrid Journal   (Followers: 1)
Canadian Journal of Mathematics / Journal canadien de mathématiques     Hybrid Journal  
Canadian Journal of Science, Mathematics and Technology Education     Hybrid Journal   (Followers: 20)
Canadian Mathematical Bulletin     Hybrid Journal  
Carpathian Mathematical Publications     Open Access  
Catalysis in Industry     Hybrid Journal  
CAUCHY     Open Access   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 5)
CHANCE     Hybrid Journal   (Followers: 5)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 1)
Chaos, Solitons & Fractals : X     Open Access   (Followers: 1)
ChemSusChem     Hybrid Journal   (Followers: 9)
Chinese Annals of Mathematics, Series B     Hybrid Journal  
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
Chinese Journal of Mathematics     Open Access  
Ciencia     Open Access  
CODEE Journal     Open Access  
Cogent Mathematics     Open Access   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 3)
Collectanea Mathematica     Hybrid Journal  
College Mathematics Journal     Hybrid Journal   (Followers: 3)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 5)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 22)
Commentarii Mathematici Helvetici     Hybrid Journal   (Followers: 1)
Communications in Combinatorics and Optimization     Open Access  
Communications in Contemporary Mathematics     Hybrid Journal  
Communications in Mathematical Physics     Hybrid Journal   (Followers: 3)
Communications On Pure & Applied Mathematics     Hybrid Journal   (Followers: 6)
Complex Analysis and its Synergies     Open Access   (Followers: 1)
Complex Variables and Elliptic Equations: An International Journal     Hybrid Journal  
Compositio Mathematica     Full-text available via subscription   (Followers: 2)
Comptes Rendus : Mathematique     Open Access  
Computational and Applied Mathematics     Hybrid Journal   (Followers: 3)
Computational and Mathematical Methods     Hybrid Journal  
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 5)
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 13)
Computational Methods and Function Theory     Hybrid Journal  
Computational Optimization and Applications     Hybrid Journal   (Followers: 11)
Computers & Mathematics with Applications     Full-text available via subscription   (Followers: 12)
Confluentes Mathematici     Hybrid Journal  
Constructive Mathematical Analysis     Open Access  
Contributions to Discrete Mathematics     Open Access  
Contributions to Game Theory and Management     Open Access   (Followers: 1)
COSMOS     Hybrid Journal   (Followers: 1)
Cross Section     Full-text available via subscription   (Followers: 1)
Cryptography and Communications     Hybrid Journal   (Followers: 11)
Cuadernos de Investigación y Formación en Educación Matemática     Open Access  
Cubo. A Mathematical Journal     Open Access  
Current Research in Biostatistics     Open Access   (Followers: 8)
Czechoslovak Mathematical Journal     Hybrid Journal  
Daya Matematis : Jurnal Inovasi Pendidikan Matematika     Open Access  
Demographic Research     Open Access   (Followers: 14)
Design Journal : An International Journal for All Aspects of Design     Hybrid Journal   (Followers: 35)
Desimal : Jurnal Matematika     Open Access  
Dhaka University Journal of Science     Open Access  
Differential Equations and Dynamical Systems     Hybrid Journal   (Followers: 3)
Differentsial'nye Uravneniya     Open Access  
Digital Experiences in Mathematics Education     Hybrid Journal   (Followers: 3)
Discrete Mathematics     Hybrid Journal   (Followers: 7)
Discrete Mathematics & Theoretical Computer Science     Open Access   (Followers: 1)
Discrete Mathematics, Algorithms and Applications     Hybrid Journal   (Followers: 2)
Discussiones Mathematicae - General Algebra and Applications     Open Access  
Discussiones Mathematicae Graph Theory     Open Access   (Followers: 1)
Diskretnaya Matematika     Full-text available via subscription  
Doklady Akademii Nauk     Open Access  


Algorithms
Journal Prestige (SJR): 0.217
Citation Impact (citeScore): 1
Number of Followers: 15  

  This is an Open Access journal
ISSN (Print) 1999-4893
Published by MDPI  [258 journals]
  • Algorithms, Vol. 16, Pages 125: Assessing the Mass Transfer Coefficient in
           Jet Bioreactors with Classical Computer Vision Methods and Neural Networks

    • Authors: Irina Nizovtseva, Vladimir Palmin, Ivan Simkin, Ilya Starodumov, Pavel Mikushin, Alexander Nozik, Timur Hamitov, Sergey Ivanov, Sergey Vikharev, Alexei Zinovev, Vladislav Svitich, Matvey Mogilev, Margarita Nikishina, Simon Kraev, Stanislav Yurchenko, Timofey Mityashin, Dmitrii Chernushkin, Anna Kalyuzhnaya, Felix Blyakhman
      First page: 125
      Abstract: The development of energy-efficient, high-performance bioreactors requires progress in methods for assessing the key parameters of the biosynthesis process. Despite the wide variety of approaches for determining the phase contact area in gas–liquid flows, obtaining an accurate quantitative estimate of it remains an open question. Particularly challenging are obtaining mass transfer coefficients instantly and developing predictive capabilities for effective flow control in continuous fermentation at both laboratory and industrial scales. Motivated by the possibility of applying classical and non-classical computer vision methods to high-precision video recordings of bubble flows captured in the bioreactor vessel during the experiment, we obtained the results presented in this paper. Characteristics of the bioreactor's bubble flow were first estimated with classical computer vision (CCV) methods, including an elliptic regression approach for selecting and clustering single-bubble boundaries, image transformation through a set of filters, and an algorithm for separating overlapping bubbles. Applying the developed method to the entire video recording yields parameter distributions and allows dropout thresholds to be set, improving the estimates through averaging. The CCV methodology was also tested and verified on a manually collected and labeled dataset. A deep neural network (NN) approach was then applied, for instance to the segmentation task, and demonstrated advantages in segmentation resolution, while the classical approach tends to be faster. Thus, this manuscript discusses the advantages and disadvantages of both the CCV method and the NN approach, based on evaluating the number of bubbles and their areas, and presents an approach to estimating the mass transfer coefficient from the results obtained.
      Citation: Algorithms
      PubDate: 2023-02-21
      DOI: 10.3390/a16030125
      Issue No: Vol. 16, No. 3 (2023)
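The CCV bubble-counting step the abstract describes can be illustrated with a toy sketch — plain connected-component labeling on an already-binarized frame. This is a hypothetical stand-in, not the authors' elliptic-regression pipeline:

```python
from collections import deque

def count_bubbles(mask):
    """Count connected regions of 1s in a binary image (4-connectivity).

    A toy stand-in for CCV bubble detection: a real pipeline would add
    filtering, elliptic boundary regression and overlap separation.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new bubble found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # flood-fill the region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```

Per-region pixel counts (bubble areas) fall out of the same traversal, which is the quantity the paper's mass-transfer estimate builds on.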
       
  • Algorithms, Vol. 16, Pages 126: GRNN: Graph-Retraining Neural Network for
           Semi-Supervised Node Classification

    • Authors: Jianhe Li, Suohai Fan
      First page: 126
      Abstract: In recent years, graph neural networks (GNNs) have played an important role in graph representation learning and have achieved excellent results in semi-supervised classification. However, these GNNs often neglect global smoothing of the graph, because global smoothing is incompatible with node classification: a cluster of nodes in the graph typically contains a small number of nodes from other classes. To address this issue, we propose a graph-retraining neural network (GRNN) model that performs smoothing over the graph by alternating between a learning procedure and an inference procedure, based on the key idea of the expectation-maximization algorithm. Moreover, the global smoothing error is combined with the cross-entropy error to form the loss function of GRNN, which effectively resolves the incompatibility. The experiments show that GRNN achieves high accuracy on the standard citation network datasets Cora, Citeseer, and PubMed, demonstrating its effectiveness in semi-supervised node classification.
      Citation: Algorithms
      PubDate: 2023-02-22
      DOI: 10.3390/a16030126
      Issue No: Vol. 16, No. 3 (2023)
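The graph-smoothing idea GRNN builds on can be sketched with plain neighbor-averaging label propagation. This is a generic illustration under assumed inputs (an adjacency matrix and per-node class probabilities), not the GRNN model itself:

```python
import numpy as np

def smooth_labels(adj, probs, alpha=0.5, iters=10):
    """Iteratively mix each node's class probabilities with the average
    of its neighbors' — the basic smoothing-over-the-graph step that
    models like GRNN refine (this is plain label propagation, not GRNN).
    """
    A = np.asarray(adj, dtype=float)
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # isolated nodes keep their probs
    P0 = np.asarray(probs, dtype=float)
    P = P0.copy()
    for _ in range(iters):
        P = (1 - alpha) * P0 + alpha * (A @ P) / deg
    return P
```

Each row remains a probability distribution, since every update is a convex combination of probability rows.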
       
  • Algorithms, Vol. 16, Pages 127: The Need for Speed: A Fast Guessing
           Entropy Calculation for Deep Learning-Based SCA

    • Authors: Guilherme Perin, Lichao Wu, Stjepan Picek
      First page: 127
      Abstract: The adoption of deep neural networks for profiling side-channel attacks opened new perspectives for leakage detection. Recent publications showed that cryptographic implementations featuring different countermeasures could be broken without feature selection or trace preprocessing. This success comes with a high price: an extensive hyperparameter search to find optimal deep learning models. As deep learning models usually suffer from overfitting due to their high fitting capacity, it is crucial to avoid over-training regimes, which requires a correct number of epochs. For that, early stopping is employed as an efficient regularization method that requires a consistent validation metric. Although guessing entropy is a highly informative metric for profiling side-channel attacks, it is time-consuming to compute, especially when evaluated at every epoch during training over a large number of validation traces. This paper shows that guessing entropy can be efficiently computed during training by reducing the number of validation traces without affecting the efficiency of early stopping decisions. Our solution significantly speeds up the process, impacting the performance of the hyperparameter search and the overall profiling attack. Our fast guessing entropy calculation is up to 16× faster, allowing more hyperparameter tuning experiments and enabling security evaluators to find more efficient deep learning models.
      Citation: Algorithms
      PubDate: 2023-02-23
      DOI: 10.3390/a16030127
      Issue No: Vol. 16, No. 3 (2023)
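The core quantity here — the rank of the correct key after accumulating per-trace evidence — can be sketched as follows. This is a minimal textbook version of the metric, not the paper's accelerated computation; the input shape is an assumption:

```python
import numpy as np

def key_rank(log_probs, correct_key):
    """Rank of the correct key hypothesis after summing per-trace
    log-probabilities (rank 0 = key recovered). Guessing entropy is
    this rank averaged over many repeated attacks.

    log_probs: array of shape (n_traces, n_key_hypotheses).
    """
    scores = np.asarray(log_probs).sum(axis=0)   # evidence per hypothesis
    order = np.argsort(scores)[::-1]             # best hypothesis first
    return int(np.where(order == correct_key)[0][0])
```

The cost of this metric grows with the number of validation traces summed, which is why the paper's trace-reduction strategy pays off during per-epoch early-stopping checks.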
       
  • Algorithms, Vol. 16, Pages 128: A Novel Classification Algorithm Based on
           Multidimensional F1 Fuzzy Transform and PCA Feature Extraction

    • Authors: Cardone, Di Martino
      First page: 128
      Abstract: The bi-dimensional F1-transform was applied in image analysis to improve the performance of the F-transform method; however, due to its high computational complexity, the multidimensional F1-transform cannot be used in data analysis problems, especially in the presence of a large number of features. In this research, we propose a new classification method based on the multidimensional F1-transform in which Principal Component Analysis is applied to reduce the dataset size. We test our method on various well-known classification datasets, showing that it improves on the F-transform classification method and on other well-known classification algorithms; furthermore, the execution time of the F1-transform classification method is similar to that of the F-transform and the other classification algorithms.
      Citation: Algorithms
      PubDate: 2023-02-23
      DOI: 10.3390/a16030128
      Issue No: Vol. 16, No. 3 (2023)
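The PCA feature-extraction step the method relies on reduces to an SVD of the centered data. A minimal sketch (standard PCA, not the paper's F1-transform classifier):

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its top-k principal components via SVD —
    the dimensionality-reduction step applied before feeding features
    to a classifier such as the multidimensional F1-transform one.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores in k dimensions
```

Keeping only the first k right-singular vectors retains the directions of largest variance, which is what makes the subsequent F1-transform tractable on wide datasets.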
       
  • Algorithms, Vol. 16, Pages 129: Apollonian Packing of Circles within
           Ellipses

    • Authors: Carlo Santini, Fabio Mangini, Fabrizio Frezza
      First page: 129
      Abstract: The purpose of a circle packing procedure is to fill up a predefined, geometrical, closed contour with a maximum finite number of circles. The subject has received considerable attention in pure and applied sciences and has proved to be highly effective in connection with many a problem in logistics and technology. The well-known Apollonian circle packing achieves the packing of an infinite number of mutually tangent smaller circles of decreasing radii, internal or tangent to the outer boundary. Algorithms are available in the literature for the packing of equal-radius circles within an ellipse for global optimization purposes. In this paper, we propose a new algorithm for the Apollonian packing of circles within an ellipse, based on fundamental numerical methods, granting suitable speed, accuracy and stability. The novelty of the proposed approach consists in its applicability to the Apollonian packing of circles within a generic, closed, convex contour, if the parametrization of its outer boundary is given.
      Citation: Algorithms
      PubDate: 2023-02-24
      DOI: 10.3390/a16030129
      Issue No: Vol. 16, No. 3 (2023)
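Apollonian packings are classically generated with Descartes' circle theorem, which gives the curvatures of the two circles tangent to three mutually tangent ones. A sketch of that primitive (the paper's contribution — packing inside an ellipse or a generic convex contour — goes beyond this):

```python
import math

def descartes_fourth(k1, k2, k3):
    """Curvatures of the two circles tangent to three mutually tangent
    circles (Descartes' circle theorem). Curvature = 1/radius; a
    negative curvature denotes an enclosing circle.
    """
    s = k1 + k2 + k3
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root
```

For example, inside a unit circle (curvature −1) containing two tangent circles of radius 1/2 (curvature 2 each), both solutions have curvature 3, i.e. radius 1/3 — the next generation of the classic Apollonian gasket.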
       
  • Algorithms, Vol. 16, Pages 130: Asian Affective and Emotional State (A2ES)
           Dataset of ECG and PPG for Affective Computing Research

    • Authors: Nor Azlina Ab. Aziz, Tawsif K., Sharifah Noor Masidayu Sayed Ismail, Muhammad Anas Hasnul, Kamarulzaman Ab. Aziz, Siti Zainab Ibrahim, Azlan Abd. Aziz, J. Emerson Raja
      First page: 130
      Abstract: Affective computing focuses on instilling emotion awareness in machines. This area has attracted many researchers globally. However, the lack of an affective database based on physiological signals from the Asian continent has been reported. This is an important issue for ensuring inclusiveness and avoiding bias in this field. This paper introduces an emotion recognition database, the Asian Affective and Emotional State (A2ES) dataset, for affective computing research. The database comprises electrocardiogram (ECG) and photoplethysmography (PPG) recordings from 47 Asian participants of various ethnicities. The subjects were exposed to 25 carefully selected audio–visual stimuli to elicit specific targeted emotions. An analysis of the participants' self-assessments and a list of the 25 stimuli utilised are also presented in this work. Emotion recognition systems are built using the ECG and PPG data; five machine learning algorithms: support vector machine (SVM), k-nearest neighbour (KNN), naive Bayes (NB), decision tree (DT), and random forest (RF); and deep learning techniques. The performance of the resulting systems is presented and compared. The SVM was found to be the best learning algorithm for the ECG data, while RF was the best for the PPG data. The proposed database is available to other researchers.
      Citation: Algorithms
      PubDate: 2023-02-27
      DOI: 10.3390/a16030130
      Issue No: Vol. 16, No. 3 (2023)
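One of the baselines listed, KNN, is simple enough to sketch in full. A toy majority-vote classifier on numeric feature vectors (hypothetical inputs, not the paper's ECG/PPG feature pipeline):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance) — the KNN baseline in miniature.
    """
    d = np.linalg.norm(np.asarray(X_train, float) - np.asarray(x, float),
                       axis=1)                       # distance to each sample
    nearest = np.argsort(d)[:k]                      # indices of k closest
    votes = np.asarray(y_train)[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

In practice one would use a library implementation (e.g. scikit-learn's `KNeighborsClassifier`), but the decision rule is exactly this vote.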
       
  • Algorithms, Vol. 16, Pages 131: Computing the Gromov-Wasserstein Distance
           between Two Surface Meshes Using Optimal Transport

    • Authors: Patrice Koehl, Marc Delarue, Henri Orland
      First page: 131
      Abstract: The Gromov-Wasserstein (GW) formalism can be seen as a generalization of the optimal transport (OT) formalism for comparing two distributions associated with different metric spaces. It is a quadratic optimization problem, and solving it usually has computational costs that can rise sharply if the problem size exceeds a few hundred points. Recently, fast techniques based on entropy regularization have been developed to solve an approximation of the GW problem quickly. There are issues, however, with the numerical convergence of those regularized approximations to the true GW solution. To circumvent those issues, we introduce a novel strategy to solve the discrete GW problem using methods taken from statistical physics. We build a temperature-dependent free energy function that reflects the GW problem’s constraints. To account for possible differences of scales between the two metric spaces, we introduce a scaling factor s in the definition of the energy. From the extremum of the free energy, we derive a mapping between the two probability measures that are being compared, as well as a distance between those measures. This distance is equal to the GW distance when the temperature goes to zero. The optimal scaling factor itself is obtained by minimizing the free energy with respect to s. We illustrate our approach on the problem of comparing shapes defined by unstructured triangulations of their surfaces. We use several synthetic and “real life” datasets. We demonstrate the accuracy and automaticity of our approach in non-rigid registration of shapes. We provide numerical evidence that there is a strong correlation between the GW distances computed from low-resolution, surface-based representations of proteins and the analogous distances computed from atomistic models of the same proteins.
      Citation: Algorithms
      PubDate: 2023-02-28
      DOI: 10.3390/a16030131
      Issue No: Vol. 16, No. 3 (2023)
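The entropy-regularized approximations the abstract refers to are typically solved with Sinkhorn iterations. A sketch for the plain (linear) OT problem — GW itself is a quadratic extension of this setting, and the paper's free-energy method is different again:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C, solved by alternating Sinkhorn scalings.
    Returns the transport plan; smaller eps approaches unregularized OT.
    """
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    K = np.exp(-np.asarray(C, float) / eps)      # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                        # match column marginals
        u = a / (K @ v)                          # match row marginals
    return u[:, None] * K * v[None, :]
```

The numerical-convergence issues the authors mention arise precisely as eps → 0, when the kernel K becomes ill-conditioned.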
       
  • Algorithms, Vol. 16, Pages 132: An Efficient GNSS Coordinate Recognition
           Algorithm for Epidemic Management

    • Authors: Jong-Shin Chen, Chun-Ming Kuo, Ruo-Wei Hung
      First page: 132
      Abstract: Many highly contagious infectious diseases, such as COVID-19, monkeypox, chickenpox, and influenza, have seriously affected, or currently are seriously affecting, human health, economic activities, education, sports, and leisure. Many people are infected or quarantined when an epidemic spreads in specific areas. In this article, people whose activities must be restricted due to the epidemic are represented as targets. Managing targets by means of targeted areas is an effective option for slowing the spread. The Centers for Disease Control (CDC) usually determine management strategies by tracking targets in specific areas. A global navigation satellite system (GNSS), which can provide autonomous geospatial positioning of targets via tiny electronic receivers, can assist in this recognition. Recognizing targets within a targeted area is a point-in-polygon (PtInPy) problem in computational geometry. Most previous methods identify one target at a time and therefore cannot deal effectively with many targets. An earlier method could recognize several targets simultaneously but suffered from repeated recognition of the same targets. Therefore, we propose a GNSS coordinate recognition algorithm that can efficiently recognize a large number of targets within a targeted area, providing assistance in epidemic management.
      Citation: Algorithms
      PubDate: 2023-03-01
      DOI: 10.3390/a16030132
      Issue No: Vol. 16, No. 3 (2023)
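The point-in-polygon primitive underlying the algorithm is the classical ray-casting test. A sketch for a single coordinate against one polygonal targeted area (the paper's contribution is handling many targets at once, which this does not show):

```python
def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test: count how many polygon edges
    a horizontal ray from pt crosses; an odd count means inside.
    poly is a list of (x, y) vertices in order.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing to the right
                inside = not inside
    return inside
```

Running this naively per target costs O(n) per GNSS coordinate, which is the per-target overhead a batch-recognition algorithm aims to amortize.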
       
  • Algorithms, Vol. 16, Pages 133: Crossover Rate Sorting in Adaptive
           Differential Evolution

    • Authors: Vladimir Stanovov, Lev Kazakovtsev, Eugene Semenkin
      First page: 133
      Abstract: Differential evolution (DE) is a popular and efficient heuristic numerical optimization algorithm that has found many applications in various fields. One of the main disadvantages of DE is its sensitivity to parameter values. In this study, we investigate the effect of the previously proposed crossover rate sorting mechanism on modern versions of DE. The sorting of the crossover rates, generated by a parameter adaptation mechanism prior to applying them in the crossover operation, enables the algorithm to make smaller changes to better individuals, and larger changes to worse ones, resulting in better exploration and exploitation. The experiments in this study were performed on several modern algorithms, namely L-SHADE-RSP, NL-SHADE-RSP, NL-SHADE-LBC and L-NTADE and two benchmark suites of test problems, CEC 2017 and CEC 2022. It is shown that crossover rate sorting does not result in significant additional computational efforts, but may improve results in certain scenarios, especially for high-dimensional problems.
      Citation: Algorithms
      PubDate: 2023-03-02
      DOI: 10.3390/a16030133
      Issue No: Vol. 16, No. 3 (2023)
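The sorting mechanism studied can be sketched directly from its description: CR values produced by the parameter adaptation are sorted, then assigned so that better individuals receive smaller CRs. A minimal version, assuming a minimization problem:

```python
import numpy as np

def sorted_crossover_rates(fitness, cr_values):
    """Crossover rate sorting: sort the generated CR values and assign
    them by fitness rank, so better individuals (lower fitness) get
    smaller CRs — smaller changes to good solutions, larger to bad ones.
    """
    cr_sorted = np.sort(np.asarray(cr_values, float))
    rank = np.argsort(np.argsort(fitness))   # rank 0 = best individual
    return cr_sorted[rank]
```

The extra cost is a single sort per generation, consistent with the abstract's observation that the mechanism adds no significant computational effort.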
       
  • Algorithms, Vol. 16, Pages 134: Plum Tree Algorithm and Weighted
           Aggregated Ensembles for Energy Efficiency Estimation

    • Authors: Dorin Moldovan
      First page: 134
      Abstract: This article introduces a novel nature-inspired algorithm called the Plum Tree Algorithm (PTA), which takes the biology of plum trees as its main source of inspiration. The PTA was tested and validated using 24 benchmark objective functions, and it was further applied and compared to the following selection of representative state-of-the-art, nature-inspired algorithms: the Chicken Swarm Optimization (CSO) algorithm, the Particle Swarm Optimization (PSO) algorithm, the Grey Wolf Optimizer (GWO), the Cuckoo Search (CS) algorithm, the Crow Search Algorithm (CSA), and the Horse Optimization Algorithm (HOA). The results obtained with the PTA are comparable to those obtained with the other nature-inspired optimization algorithms, and the PTA returned the best overall results for the 24 objective functions tested. This article then applies the PTA to weight optimization for an ensemble of four machine learning regressors, namely, the Random Forest Regressor (RFR), the Gradient Boosting Regressor (GBR), the AdaBoost Regressor (AdaBoost), and the Extra Trees Regressor (ETR), used for predicting the heating load and cooling load requirements of buildings, with the Energy Efficiency Dataset from UCI Machine Learning as experimental support. The PTA-optimized ensemble returned results comparable to those of the ensembles optimized with the GWO, the CS, and the CSA.
      Citation: Algorithms
      PubDate: 2023-03-02
      DOI: 10.3390/a16030134
      Issue No: Vol. 16, No. 3 (2023)
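The weighted aggregation that the PTA tunes in the second part of the study can be sketched as a convex combination of per-model predictions. The weights below are fixed toy values standing in for PTA-optimized ones, and the numbers are invented:

```python
def weighted_ensemble(predictions, weights):
    """Combine per-model predictions with convex weights (non-negative,
    summing to 1), as in a weighted aggregated ensemble."""
    total = sum(weights)
    w = [wi / total for wi in weights]  # normalise so weights sum to 1
    n = len(predictions[0])
    return [sum(w[m] * predictions[m][i] for m in range(len(predictions)))
            for i in range(n)]

# Two models' heating-load predictions for three buildings (toy numbers).
preds = [[10.0, 20.0, 30.0],
         [14.0, 18.0, 26.0]]
print(weighted_ensemble(preds, [0.5, 0.5]))  # -> [12.0, 19.0, 28.0]
```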
       
  • Algorithms, Vol. 16, Pages 135: Storytelling with Image Data: A Systematic
           Review and Comparative Analysis of Methods and Tools

    • Authors: Fariba Lotfi, Amin Beheshti, Helia Farhood, Matineh Pooshideh, Mansour Jamzad, Hamid Beigy
      First page: 135
      Abstract: In our digital age, data are generated constantly from public and private sources, social media platforms, and the Internet of Things. A significant portion of this information comes in the form of unstructured images and videos, such as the 95 million daily photos and videos shared on Instagram and the 136 billion images available on Google Images. Despite advances in image processing and analytics, the current state of the art lacks effective methods for discovering, linking, and comprehending image data. Consider, for instance, the images from a crime scene that hold critical information for a police investigation. Currently, no system can interactively generate a comprehensive narrative of events from the incident to the conclusion of the investigation. To address this gap in research, we have conducted a thorough systematic literature review of existing methods, from labeling and captioning to extraction, enrichment, and transforming image data into contextualized information and knowledge. Our review has led us to propose the vision of storytelling with image data, an innovative framework designed to address fundamental challenges in image data comprehension. In particular, we focus on the research problem of understanding image data in general and, specifically, curating, summarizing, linking, and presenting large amounts of image data in a digestible manner to users. In this context, storytelling serves as an appropriate metaphor, as it can capture and depict the narratives and insights locked within the relationships among data stored across different islands. Additionally, a story can be subjective and told from various perspectives, ranging from a highly abstract narrative to a highly detailed one.
      Citation: Algorithms
      PubDate: 2023-03-02
      DOI: 10.3390/a16030135
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 136: Line-Level Layout Recognition of
           Historical Documents with Background Knowledge

    • Authors: Norbert Fischer, Alexander Hartelt, Frank Puppe
      First page: 136
      Abstract: Digitization and transcription of historic documents offer new research opportunities for humanists and are the topics of many edition projects. However, manual work is still required for the main phases of layout recognition and the subsequent optical character recognition (OCR) of early printed documents. This paper describes and evaluates how deep learning approaches recognize text lines and can be extended to layout recognition using background knowledge. The evaluation was performed on five corpora of early prints from the 15th and 16th centuries, representing a variety of layout features. While the main text with standard layouts could be recognized in the correct reading order with a precision and recall of up to 99.9%, even complex layouts were recognized at rates as high as 90% by using background knowledge, the full potential of which was revealed when many pages of the same source were transcribed.
      Citation: Algorithms
      PubDate: 2023-03-03
      DOI: 10.3390/a16030136
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 137: Acoustic Echo Cancellation with the
           Normalized Sign-Error Least Mean Squares Algorithm and Deep Residual Echo
           Suppression

    • Authors: Eran Shachar, Israel Cohen, Baruch Berdugo
      First page: 137
      Abstract: This paper presents an echo suppression system that combines a linear acoustic echo canceller (AEC) with a deep complex convolutional recurrent network (DCCRN) for residual echo suppression. The filter taps of the AEC are adjusted in subbands by using the normalized sign-error least mean squares (NSLMS) algorithm. The NSLMS is compared with the commonly used normalized least mean squares (NLMS), and the combination of each with the proposed deep residual echo suppression model is studied. The utilization of a pre-trained deep-learning speech denoising model as an alternative to a residual echo suppressor (RES) is also studied. The results showed that the performance of the NSLMS is superior to that of the NLMS in all settings. With the NSLMS output, the proposed RES achieved better performance than the larger pre-trained speech denoiser model. More notably, the denoiser performed considerably better on the NSLMS output than on the NLMS output, and the performance gap was greater than the respective gap when employing the RES, indicating that the residual echo in the NSLMS output was more akin to noise than speech. Therefore, when little data is available to train an RES, a pre-trained speech denoiser is a viable alternative, provided that the preceding linear AEC employs the NSLMS.
      Citation: Algorithms
      PubDate: 2023-03-03
      DOI: 10.3390/a16030137
      Issue No: Vol. 16, No. 3 (2023)
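A fullband sketch of the normalized sign-error LMS update described above (the paper adapts the taps in subbands); the step size, one-tap echo path, and constant excitation are illustrative assumptions:

```python
def nslms_step(w, x, d, mu=0.01, eps=1e-8):
    """One NSLMS update: w are the filter taps, x the input vector
    (most recent sample first), d the desired (microphone) sample."""
    y = sum(wi * xi for wi, xi in zip(w, x))       # adaptive filter output
    e = d - y                                      # error signal
    norm = sum(xi * xi for xi in x) + eps          # input energy (normalization)
    sign_e = (e > 0) - (e < 0)                     # sign of the error
    return [wi + mu * sign_e * xi / norm for wi, xi in zip(w, x)], e

# Identify a one-tap echo path h = 0.8 from constant excitation.
w = [0.0]
for _ in range(200):
    w, e = nslms_step(w, [1.0], 0.8)
# w[0] ends up within one step size of the true tap 0.8.
```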
       
  • Algorithms, Vol. 16, Pages 138: Optimal Integration of D-STATCOMs in
           Radial and Meshed Distribution Networks Using a MATLAB-GAMS Interface

    • Authors: German Francisco Barreto-Parra, Brandon Cortés-Caicedo, Oscar Danilo Montoya
      First page: 138
      Abstract: This paper proposes an interconnection of the MATLAB and GAMS software interfaces, designed based on a master-slave methodology, to solve the mixed-integer nonlinear programming (MINLP) model associated with the optimal location and sizing of static distribution compensators (D-STATCOMs) in meshed and radial distribution networks, considering optimal reactive power flow compensation and networks with commercial, industrial, and residential loads in a daily operation scenario. The objective of this study is to reduce the annual investment and operating costs associated with energy losses and the installation costs of D-STATCOMs. This objective function is based on the classical energy budget and the capacity constraints of the device. In the master stage, MATLAB is used to program a discrete version of the sine-cosine algorithm (DSCA), which determines the locations where the D-STATCOMs will be installed. In the slave stage, using the BONMIN solver of GAMS and the known locations of the D-STATCOMs, the MINLP model representing the problem under study is solved to find the value of the objective function and the nominal power of the D-STATCOMs. To validate the effectiveness of the proposed master-slave optimizer, the 33-node IEEE test system with both radial and meshed topologies is used. With this test system, numerical comparisons were made with the exact solution of the MINLP model using different solvers in GAMS, the genetic-convex strategy, and the discrete-continuous versions of the Chu and Beasley genetic algorithm and the salp swarm optimization algorithm. The numerical results show that DSCA-BONMIN achieves a global solution to the problem under study, making the proposed method an effective tool for decision-making in distribution companies.
      Citation: Algorithms
      PubDate: 2023-03-04
      DOI: 10.3390/a16030138
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 139: A Dynamic Programming Approach to
           Ecosystem Management

    • Authors: Alessandra Rosso, Ezio Venturino
      First page: 139
      Abstract: We propose a way of dealing with invasive species or pest control in agriculture. Ecosystems can be modeled via dynamical systems. For their study, it is necessary to establish their possible equilibria. Even a moderately complex system exhibits, in general, multiple steady states. Usually, they are related to each other through transcritical bifurcations, i.e., the system settles to a different equilibrium when some bifurcation parameter crosses a critical threshold. From a situation in which the pest is endemic, it is desirable to move to a pest-free point. The map of the system’s equilibria and their connections via transcritical bifurcations may indicate a path to attain the desired state. However, to force the parameters to cross the critical threshold, some human action is required, and this effort has a cost. The tools of dynamic programming allow the detection of the cheapest path to reach the desired goal. In this paper, an algorithm for the solution to this problem is illustrated.
      Citation: Algorithms
      PubDate: 2023-03-04
      DOI: 10.3390/a16030139
      Issue No: Vol. 16, No. 3 (2023)
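The search for the cheapest sequence of bifurcation crossings can be sketched with a standard shortest-path routine over the map of equilibria; the equilibrium names and crossing costs below are invented for illustration:

```python
import heapq

def cheapest_path(edges, start, goal):
    """Cheapest sequence of bifurcation crossings from one equilibrium
    to another; edges[u] is a list of (v, cost) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in edges.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[goal]

# Toy equilibrium map: a gradual route via coexistence beats a costly
# direct intervention.
edges = {"endemic": [("coexistence", 2.0), ("pest-free", 7.0)],
         "coexistence": [("pest-free", 3.0)]}
print(cheapest_path(edges, "endemic", "pest-free"))
# -> (['endemic', 'coexistence', 'pest-free'], 5.0)
```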
       
  • Algorithms, Vol. 16, Pages 140: A Tri-Model Prediction Approach for
           COVID-19 ICU Bed Occupancy: A Case Study

    • Authors: Nikolaos Stasinos, Anestis Kousis, Vangelis Sarlis, Aristeidis Mystakidis, Dimitris Rousidis, Paraskevas Koukaras, Ioannis Kotsiopoulos, Christos Tjortjis
      First page: 140
      Abstract: The impact of COVID-19 and the pressure it exerts on health systems worldwide motivated this study, which focuses on the case of Greece. We aim to assist decision makers as well as health professionals, by estimating the short to medium term needs in Intensive Care Unit (ICU) beds. We analyse time series of confirmed cases, hospitalised patients, ICU bed occupancy, recovered patients and deaths. We employ state-of-the-art forecasting algorithms, such as ARTXP, ARIMA, SARIMAX, and Multivariate Regression models. We combine these into three forecasting models culminating to a tri-model approach in time series analysis and compare them. The results of this study show that the combination of ARIMA with SARIMAX is more accurate for the majority of the investigated regions in short term 1-week ahead predictions, while Multivariate Regression outperforms the other two models for 2-weeks ahead predictions. Finally, for the medium term 3-weeks ahead predictions the Multivariate Regression and ARIMA with SARIMAX show the best results. We report on Mean Absolute Percentage Error (MAPE), Root Mean Squared Error (RMSE), R-squared (R2), and Mean Absolute Error (MAE) values, for one-week, two-week and three-week ahead predictions for ICU bed requirements. Such timely insights offer new capabilities for efficient management of healthcare resources.
      Citation: Algorithms
      PubDate: 2023-03-04
      DOI: 10.3390/a16030140
      Issue No: Vol. 16, No. 3 (2023)
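The reported error measures are standard and can be computed directly; the occupancy numbers below are toy values, not data from the study:

```python
import math

def mape(actual, pred):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root Mean Squared Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def r2(actual, pred):
    """Coefficient of determination (R-squared)."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Toy ICU-bed occupancy: actual vs. one-week-ahead forecast.
actual = [100.0, 110.0, 120.0]
pred = [90.0, 115.0, 120.0]
print(round(mape(actual, pred), 2), round(mae(actual, pred), 2))  # -> 4.85 5.0
```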
       
  • Algorithms, Vol. 16, Pages 141: Energy-Efficient and Real-Time Wearable
           for Wellbeing-Monitoring IoT System Based on SoC-FPGA

    • Authors: Maria Inês Frutuoso, Horácio C. Neto, Mário P. Véstias, Rui Policarpo Duarte
      First page: 141
      Abstract: Wearable devices used for personal monitoring applications have been improved over the last decades. However, these devices are limited in terms of size, processing capability and power consumption. This paper proposes an efficient hardware/software embedded system for monitoring bio-signals in real time, including a heart rate calculator using PPG and an emotion classifier from EEG. The system is suitable for outpatient clinic applications requiring data transfers to external medical staff. The proposed solution contributes with an effective alternative to the traditional approach of processing bio-signals offline by proposing a SoC-FPGA based system that is able to fully process the signals locally at the node. Two sub-systems were developed targeting a Zynq 7010 device and integrating custom hardware IP cores that accelerate the processing of the most complex tasks. The PPG sub-system implements an autocorrelation peak detection algorithm to calculate heart rate values. The EEG sub-system consists of a KNN emotion classifier of preprocessed EEG features. This work overcomes the processing limitations of microcontrollers and general-purpose units, presenting a scalable and autonomous wearable solution with high processing capability and real-time response.
      Citation: Algorithms
      PubDate: 2023-03-04
      DOI: 10.3390/a16030141
      Issue No: Vol. 16, No. 3 (2023)
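The autocorrelation peak detection used for the heart rate calculation can be sketched on a synthetic signal; the sampling rate and plausible-BPM bounds are illustrative assumptions, not parameters from the paper:

```python
import math

def heart_rate_autocorr(signal, fs, min_bpm=40, max_bpm=200):
    """Estimate heart rate by locating the autocorrelation peak within
    the physiologically plausible lag range."""
    mean = sum(signal) / len(signal)
    x = [s - mean for s in signal]
    lo = int(fs * 60 / max_bpm)            # shortest plausible period (samples)
    hi = int(fs * 60 / min_bpm)            # longest plausible period (samples)
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, min(hi, len(x) - 1) + 1):
        ac = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
        if ac > best_val:
            best_lag, best_val = lag, ac
    return 60.0 * fs / best_lag            # beats per minute

# Synthetic 1.25 Hz (75 bpm) PPG-like wave sampled at 50 Hz.
fs = 50
sig = [math.sin(2 * math.pi * 1.25 * n / fs) for n in range(400)]
print(round(heart_rate_autocorr(sig, fs)))  # -> 75
```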
       
  • Algorithms, Vol. 16, Pages 142: Automatic Generation of Literary Sentences
           in French

    • Authors: Luis-Gil Moreno-Jiménez, Juan-Manuel Torres-Moreno, Roseli Suzi Wedemann
      First page: 142
      Abstract: In this paper, we describe a model for the automatic generation of literary sentences in French. Although there has been much recent effort directed towards automatic text generation in general, the generation of creative, literary sentences that is not restricted to a specific genre, which we approached in this work, is a difficult task that is not commonly treated in the scientific literature. In particular, our present model has not been previously applied to the generation of sentences in the French language. Our model was based on algorithms that we previously used to generate sentences in Spanish and Portuguese and on a new corpus, which we constructed and present here, consisting of literary texts in French, called MegaLitefr. Our automatic text generation algorithm combines language models, shallow parsing, the canned text method, and deep learning artificial neural networks. We also present a manual evaluation protocol that we propose and implemented to assess the quality of the artificial sentences generated by our algorithm, by testing if they fulfil four simple criteria. We obtained encouraging results from the evaluators for most of the desired features of our artificially generated sentences.
      Citation: Algorithms
      PubDate: 2023-03-06
      DOI: 10.3390/a16030142
      Issue No: Vol. 16, No. 3 (2023)
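The canned-text component of the generation pipeline can be illustrated by template filling; the templates and lexicon below are invented toy examples, not material from the MegaLitefr corpus or the authors' algorithm:

```python
import random

def canned_sentence(templates, lexicon, seed=0):
    """Fill a canned-text template with words drawn from a small lexicon;
    a stand-in for the corpus-driven selection in the full model."""
    rng = random.Random(seed)
    template = rng.choice(templates)
    return template.format(**{k: rng.choice(v) for k, v in lexicon.items()})

# Toy French templates and lexicon (accents omitted for simplicity).
templates = ["Le {noun} {verb} sous la lune.",
             "Un {noun} {verb} dans le silence."]
lexicon = {"noun": ["coeur", "reve"], "verb": ["danse", "tremble"]}
print(canned_sentence(templates, lexicon))
```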
       
  • Algorithms, Vol. 16, Pages 143: Nearest Neighbours Graph Variational
           AutoEncoder

    • Authors: Lorenzo Arsini, Barbara Caccia, Andrea Ciardiello, Stefano Giagu, Carlo Mancini Terracciano
      First page: 143
      Abstract: Graphs are versatile structures for the representation of many real-world data. Deep Learning on graphs is currently able to solve a wide range of problems with excellent results. However, both the generation of graphs and the handling of large graphs still remain open challenges. This work aims to introduce techniques for generating large graphs and test the approach on a complex problem such as the calculation of dose distribution in oncological radiotherapy applications. To this end, we introduced a pooling technique (ReNN-Pool) capable of sampling nodes that are spatially uniform without computational requirements in both model training and inference. By construction, the ReNN-Pool also allows the definition of a symmetric un-pooling operation to recover the original dimensionality of the graphs. We also present a Variational AutoEncoder (VAE) for generating graphs, based on the defined pooling and un-pooling operations, which employs convolutional graph layers in both encoding and decoding phases. The performance of the model was tested on both the realistic use case of a cylindrical graph dataset for a radiotherapy application and the standard benchmark dataset sprite. Compared to other graph pooling techniques, ReNN-Pool proved to improve both performance and computational requirements.
      Citation: Algorithms
      PubDate: 2023-03-06
      DOI: 10.3390/a16030143
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 144: Expansion Lemma—Variations and
           Applications to Polynomial-Time Preprocessing

    • Authors: Ashwin Jacob, Diptapriyo Majumdar, Venkatesh Raman
      First page: 144
      Abstract: In parameterized complexity, it is well-known that a parameterized problem is fixed-parameter tractable if and only if it has a kernel—an instance equivalent to the input instance, whose size is just a function of the parameter. The size of the kernel can be exponential or worse, resulting in a quest for fixed-parameter tractable problems with polynomial-sized kernels. The developments in machinery (showing lower bounds for the sizes of the kernels) have led researchers to question what are the asymptotically optimum sizes for the kernels of fixed-parameter tractable problems. In this article, we surveyed a tool called expansion lemma that helps in reducing the size of the kernel. Its early origin was in the form of crown decomposition, i.e., to obtain the linear kernel for the Vertex Cover problem; the specific lemma was identified as the tool behind the optimal O(k2) kernel for the undirected feedback vertex set problem. Since then, several variations and extensions of the tool have been discovered. We surveyed them along with their applications in this article.
      Citation: Algorithms
      PubDate: 2023-03-06
      DOI: 10.3390/a16030144
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 145: Special Issue on Supervised and
           Unsupervised Classification Algorithms—Foreword from Guest Editors

    • Authors: Laura Antonelli, Mario Rosario Guarracino
      First page: 145
      Abstract: Supervised and unsupervised classification algorithms are the two main branches of machine learning [...]
      Citation: Algorithms
      PubDate: 2023-03-07
      DOI: 10.3390/a16030145
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 146: Transfer Learning and Analogical
            Inference

    • Authors: Kara Combs, Hongjing Lu, Trevor J. Bihl
      First page: 146
      Abstract: Artificial intelligence and machine learning (AI/ML) research has aimed to achieve human-level performance in tasks that require understanding and decision making. Although major advances have been made, AI systems still struggle to achieve adaptive learning for generalization. One of the main approaches to generalization in ML is transfer learning, where previously learned knowledge is utilized to solve problems in a different, but related, domain. Another approach, pursued by cognitive scientists for several decades, has investigated the role of analogical reasoning in comparisons aimed at understanding human generalization ability. Analogical reasoning has yielded rich empirical findings and general theoretical principles underlying human analogical inference and generalization across distinctively different domains. Though seemingly similar, there are fundamental differences between the two approaches. To clarify differences and similarities, we review transfer learning algorithms, methods, and applications in comparison with work based on analogical inference. Transfer learning focuses on exploring feature spaces shared across domains through data vectorization while analogical inferences focus on identifying relational structure shared across domains via comparisons. Rather than treating these two learning approaches as synonymous or as independent and mutually irrelevant fields, a better understanding of how they are interconnected can guide a multidisciplinary synthesis of the two approaches.
      Citation: Algorithms
      PubDate: 2023-03-07
      DOI: 10.3390/a16030146
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 147: Detectron2 for Lesion Detection in
           Diabetic Retinopathy

    • Authors: Farheen Chincholi, Harald Koestler
      First page: 147
      Abstract: Hemorrhages in the retinal fundus are a common symptom of both diabetic retinopathy and diabetic macular edema, making their detection crucial for early diagnosis and treatment. For this task, the aim is to evaluate the performance of two pre-trained and additionally fine-tuned models from the Detectron2 model zoo, Faster R-CNN (R50-FPN) and Mask R-CNN (R50-FPN). Experiments show that the Mask R-CNN (R50-FPN) model provides highly accurate segmentation masks for each detected hemorrhage, with an accuracy of 99.34%. The Faster R-CNN (R50-FPN) model detects hemorrhages with an accuracy of 99.22%. The results of both models are compared using a publicly available image database with ground truth marked by experts. Overall, this study demonstrates that current models are valuable tools for early diagnosis and treatment of diabetic retinopathy and diabetic macular edema.
      Citation: Algorithms
      PubDate: 2023-03-07
      DOI: 10.3390/a16030147
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 148: Properties of the Quadratic Transformation
           of Dual Variables

    • Authors: Vladimir Krutikov, Elena Tovbis, Anatoly Bykov, Predrag Stanimirovic, Ekaterina Chernova, Lev Kazakovtsev
      First page: 148
      Abstract: We investigate a solution of a convex programming problem with a strongly convex objective function based on the dual approach. A dual optimization problem has constraints on the positivity of variables. We study the methods and properties of transformations of dual variables that enable us to obtain an unconstrained optimization problem. We investigate the previously known method of transforming the components of dual variables in the form of their modulus (modulus method). We show that in the case of using the modulus method, the degree of the degeneracy of the function increases as it approaches the optimal point. Taking into account the ambiguity of the gradient in the boundary regions of the sign change of the new dual function variables and the increase in the degree of the function degeneracy, we need to use relaxation subgradient methods (RSM) that are difficult to implement and that can solve non-smooth non-convex optimization problems with a high degree of elongation of level surfaces. We propose to use the transformation of the components of dual variables in the form of their square (quadratic method). We prove that the transformed dual function has a Lipschitz gradient with a quadratic method of transformation. This enables us to use efficient gradient methods to find the extremum. The above properties are confirmed by a computational experiment. With a quadratic transformation compared to a modulus transformation, it is possible to obtain a solution of the problem by relaxation subgradient methods and smooth function minimization methods (conjugate gradient method and quasi-Newtonian method) with higher accuracy and lower computational costs. The noted transformations of dual variables were used in the program module for calculating the maximum permissible emissions of enterprises (MPE) of the software package for environmental monitoring of atmospheric air (ERA-AIR).
      Citation: Algorithms
      PubDate: 2023-03-07
      DOI: 10.3390/a16030148
      Issue No: Vol. 16, No. 3 (2023)
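The quadratic transformation can be illustrated on a one-dimensional toy dual: substituting lambda = t^2 removes the positivity constraint, and the chain rule g'(t) = 2 t f'(t^2) gives the gradient of the transformed function. The concave objective, step size, and iteration count below are invented for illustration:

```python
def maximize_dual(grad_f, t0=1.0, lr=0.1, steps=500):
    """Unconstrained gradient ascent on g(t) = f(t^2); by the chain rule
    g'(t) = 2 t f'(t^2), and lambda = t^2 stays feasible automatically."""
    t = t0
    for _ in range(steps):
        t += lr * 2 * t * grad_f(t * t)
    return t * t  # recover lambda >= 0

# Toy concave dual f(lambda) = -(lambda - 2)^2, maximized at lambda = 2;
# grad_f is its derivative f'(lambda) = -2 (lambda - 2).
lam = maximize_dual(lambda l: -2 * (l - 2))
print(round(lam, 3))  # -> 2.0
```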
       
  • Algorithms, Vol. 16, Pages 149: Electromyography Gesture Model Classifier
           for Fault-Tolerant-Embedded Devices by Means of Partial Least Square Class
           Modelling Error Correcting Output Codes (PLS-ECOC)

    • Authors: Pablo Sarabia, Alvaro Araujo, Luis Antonio Sarabia, María de la Cruz Ortiz
      First page: 149
      Abstract: Surface electromyography (sEMG) plays a crucial role in several applications, such as prosthetic control, human–machine interfaces (HMI), rehabilitation, and disease diagnosis. These applications usually run in real time, so the classifier tends to run on a wearable device. This edge processing paradigm imposes strict requirements on the classifier's complexity. To date, research on hand gesture recognition (GR) based on sEMG has used discriminant classifiers, such as support vector machines and neural networks. Although these classifiers can achieve good precision, they cannot detect when an error in classification has happened. This paper proposes a novel hand gesture multiclass model based on partial least square (PLS) class modelling that uses an encoding matrix called error correcting output codes (ECOC). A dataset of eight different gestures was classified using this method, and all errors were detected, proving the feasibility of PLS-ECOC as a fault-tolerant classifier. Considering the PLS-ECOC model as a classifier, its accuracy, precision, and F1 are 87.5, 91.87, and 86.34%, respectively, similar to those obtained by other authors. The strength of our work lies in the extra information provided by the PLS-ECOC, which allows the application to be fault tolerant while keeping a small model size and low complexity, making it suitable for embedded real-time classification.
      Citation: Algorithms
      PubDate: 2023-03-07
      DOI: 10.3390/a16030149
      Issue No: Vol. 16, No. 3 (2023)
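The error-detecting decoding that makes ECOC fault-tolerant can be sketched with Hamming-distance decoding plus a rejection threshold; the codewords below are toy values, not the paper's PLS-based class models:

```python
def ecoc_decode(code, codebook, max_hamming=1):
    """Decode an ECOC output: return the class whose codeword is nearest,
    or None when the nearest distance exceeds max_hamming (error detected)."""
    best_cls, best_d = None, len(code) + 1
    for cls, word in codebook.items():
        d = sum(a != b for a, b in zip(code, word))  # Hamming distance
        if d < best_d:
            best_cls, best_d = cls, d
    return best_cls if best_d <= max_hamming else None

# Three gestures encoded with 5-bit codewords (toy matrix).
codebook = {"fist": (0, 0, 0, 0, 0),
            "open": (1, 1, 1, 0, 0),
            "pinch": (0, 0, 1, 1, 1)}
print(ecoc_decode((1, 1, 1, 0, 0), codebook))   # exact match -> open
print(ecoc_decode((1, 0, 0, 1, 0), codebook))   # far from all -> None
```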
       
  • Algorithms, Vol. 16, Pages 150: A Machine Learning Study of High
           Robustness Quantum Walk Search Algorithm with Qudit Householder Coins

    • Authors: Hristo Tonchev, Petar Danev
      First page: 150
      Abstract: In this work, the quantum random walk search algorithm with a walk coin constructed by generalized Householder reflection and phase multiplier has been studied. The coin register is one qudit with an arbitrary dimension. Monte Carlo simulations, in combination with supervised machine learning, are used to find walk coins that make the quantum algorithm more robust to deviations in the coin’s parameters. This is achieved by introducing functional dependence between these parameters. The functions that give the best performance of the algorithm are studied in detail by numerical statistical methods. A thorough comparison between our modification and an algorithm, with coins made using only Householder reflection, shows significant advantages of the former. By applying a deep neural network, we make a prediction for the parameters of an optimal coin with an arbitrary size and estimate the algorithm’s stability for such a coin.
      Citation: Algorithms
      PubDate: 2023-03-09
      DOI: 10.3390/a16030150
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 151: Modelling and Analysis of Neuro Fuzzy
           Employee Ranking System in the Public Sector

    • Authors: Konstantinos C. Giotopoulos, Dimitrios Michalopoulos, Aristeidis Karras, Christos Karras, Spyros Sioutas
      First page: 151
      Abstract: Human resource management has a significant influence on the performance of any public body. Employee classification and ranking are time-consuming processes, which in many cases lead to controversial results. In addition, assessing employee efficiency through a variety of skills could lead to never-ending calculations and error-prone statistics. On the other hand, hard skill selection has proven to form a base for further investigation, since subjectivity is not included in the performance equation. This research proposes a ranking model for employee selection based on certain criteria and attributes. The proposed prototype shows a series of results with a low error rate, using ANFIS as the base methodological approach. This research was explanatory, and the population of the study consisted of employees, with the majority of the sample drawn from the wider region of Western Greece. The results showed a harmonic co-existence of the factors that proportionally affect the productivity of employees in public service. The model therefore provides the HR department with valuable information regarding the overall productivity of the public body, as well as significant material on each profile separately, achieving efficiency through an automated, time-saving procedure. The final output will enhance any personnel selection system with data extracted directly from the system; the current method outperformed traditional approaches and secured a non-subjective procedure for employee management applied to the public sector.
      Citation: Algorithms
      PubDate: 2023-03-09
      DOI: 10.3390/a16030151
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 152: Unsupervised Transformer-Based Anomaly
           Detection in ECG Signals

    • Authors: Abrar Alamr, Abdelmonim Artoli
      First page: 152
      Abstract: Anomaly detection is one of the basic issues in data processing that addresses different problems in healthcare sensory data. Technology has made it easier to collect large and highly variant time series data; however, complex predictive analysis models are required to ensure consistency and reliability. With the rise in the size and dimensionality of collected data, deep learning techniques, such as autoencoder (AE), recurrent neural networks (RNN), and long short-term memory (LSTM), have gained more attention and are recognized as state-of-the-art anomaly detection techniques. Recently, developments in transformer-based architecture have been proposed as an improved attention-based knowledge representation scheme. We present an unsupervised transformer-based method to evaluate and detect anomalies in electrocardiogram (ECG) signals. The model architecture comprises two parts: an embedding layer and a standard transformer encoder. We introduce, implement, test, and validate our model in two well-known datasets: ECG5000 and MIT-BIH Arrhythmia. Anomalies are detected based on loss function results between real and predicted ECG time series sequences. We found that the use of a transformer encoder as an alternative model for anomaly detection enables better performance in ECG time series data. The suggested model has a remarkable ability to detect anomalies in ECG signal and outperforms deep learning approaches found in the literature on both datasets. In the ECG5000 dataset, the model can detect anomalies with 99% accuracy, 99% F1-score, 99% AUC score, 98.1% recall, and 100% precision. In the MIT-BIH Arrhythmia dataset, the model achieved an accuracy of 89.5%, F1 score of 92.3%, AUC score of 93%, recall of 98.2%, and precision of 87.1%.
      Citation: Algorithms
      PubDate: 2023-03-09
      DOI: 10.3390/a16030152
      Issue No: Vol. 16, No. 3 (2023)
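The loss-based detection rule described above can be sketched independently of the transformer itself; here a trivial stand-in plays the role of the trained reconstruction model, and the sequences and threshold are invented:

```python
def detect_anomalies(sequences, reconstruct, threshold):
    """Flag sequences whose reconstruction loss (MSE) exceeds threshold.
    `reconstruct` stands in for the trained transformer model."""
    flags = []
    for seq in sequences:
        rec = reconstruct(seq)
        loss = sum((a - b) ** 2 for a, b in zip(seq, rec)) / len(seq)
        flags.append(loss > threshold)
    return flags

# Stand-in "model" that has only learned to reproduce a flat baseline.
reconstruct = lambda seq: [0.0] * len(seq)
normal = [0.1, -0.1, 0.05, -0.05]        # close to baseline: low loss
anomalous = [2.0, -2.0, 2.0, -2.0]       # far from baseline: high loss
print(detect_anomalies([normal, anomalous], reconstruct, threshold=0.5))
# -> [False, True]
```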
       
  • Algorithms, Vol. 16, Pages 153: Speaker-Independent Spectral Enhancement
           for Bone-Conducted Speech

    • Authors: Liangliang Cheng, Yunfeng Dou, Jian Zhou, Huabin Wang, Liang Tao
      First page: 153
      Abstract: Because of its acoustic characteristics, bone-conducted (BC) speech can be enhanced to enable better communication in complex, high-noise environments. Existing BC speech enhancement models have weak spectral recovery capability for the high-frequency part of BC speech and show poor enhancement and robustness on speaker-independent BC speech datasets. To improve the enhancement effect for speaker-independent speech enhancement, we use a GAN-based method to establish the feature mapping between BC and air-conducted (AC) speech and recover the missing components of BC speech. In addition, the method adds the training of a spectral distance constraint model and, finally, uses the trained enhancement model to reconstruct the BC speech. The experimental results show that this method is superior to comparison methods such as CycleGAN, BLSTM, GMM, and StarGAN in terms of speaker-independent BC speech enhancement and can obtain higher subjective and objective evaluation results for enhanced BC speech.
      Citation: Algorithms
      PubDate: 2023-03-09
      DOI: 10.3390/a16030153
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 154: Fusion of CCTV Video and Spatial
           Information for Automated Crowd Congestion Monitoring in Public Urban
           Spaces

    • Authors: Vivian W. H. Wong, Kincho H. Law
      First page: 154
      Abstract: Crowd congestion is one of the main causes of modern public safety issues such as stampedes. Conventional crowd congestion monitoring using closed-circuit television (CCTV) video surveillance relies on manual observation, which is tedious and often error-prone in public urban spaces where crowds are dense and occlusions are prominent. With the aim of managing crowded spaces safely, this study proposes a framework that combines spatial and temporal information to automatically map the trajectories of individual occupants, as well as to assist in real-time congestion monitoring and prediction. Through exploiting both features from CCTV footage and spatial information of the public space, the framework fuses raw CCTV video and floor plan information to create visual aids for crowd monitoring, as well as a sequence of crowd mobility graphs (CMGraphs) to store spatiotemporal features. This framework uses deep learning-based computer vision models, geometric transformations, and Kalman filter-based tracking algorithms to automate the retrieval of crowd congestion data, specifically the spatiotemporal distribution of individuals and the overall crowd flow. The resulting collective crowd movement data is then stored in the CMGraphs, which are designed to facilitate congestion forecasting at key exit/entry regions. We demonstrate our framework on two videos: one from a public train station dataset and the other recorded at a stadium following a crowded football game. Using both qualitative and quantitative insights from the experiments, we demonstrate that the suggested framework can help urban planners and infrastructure operators manage congestion hazards.
      Citation: Algorithms
      PubDate: 2023-03-10
      DOI: 10.3390/a16030154
      Issue No: Vol. 16, No. 3 (2023)
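The Kalman filter-based tracking named in the abstract above can be illustrated with a minimal 1-D constant-velocity filter; this is a generic textbook sketch (with a simplified process-noise term), not the authors' multi-target tracker.

```python
# Minimal 1-D constant-velocity Kalman filter, illustrating the kind of
# per-target tracking step used to follow occupants between video frames.
# State is [position, velocity]; the 2x2 algebra is hand-rolled, and the
# process noise q is added only on the diagonal as a simplification.

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Filter noisy position measurements; returns filtered positions."""
    x = [measurements[0], 0.0]          # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # initial covariance
    out = []
    for z in measurements:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with scalar position measurement z (H = [1, 0])
        S = P[0][0] + r                  # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

On a steadily moving target the filtered positions lock onto the true trajectory after a few frames.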
       
  • Algorithms, Vol. 16, Pages 155: A Cognitive Model for Technology Adoption

    • Authors: Fariborz Sobhanmanesh, Amin Beheshti, Nicholas Nouri, Natalia Monje Chapparo, Sandya Raj, Richard A. George
      First page: 155
      Abstract: The widespread adoption of advanced technologies, such as Artificial Intelligence (AI), Machine Learning, and Robotics, is rapidly increasing across the globe. This accelerated pace of change is drastically transforming various aspects of our lives and work, resulting in what is now known as Industry 4.0. As businesses integrate these technologies into their daily operations, their work tasks and required skill sets change significantly. The approach to technological transformation, however, varies with location, industry, and organization, and there are no published methods that can adequately forecast the adoption of technology and its impact on society. It is essential to prepare for the future impact of Industry 4.0, which requires policymakers and business leaders to be equipped with scientifically validated models and metrics. Data-driven scenario planning and decision-making can lead to better outcomes in every area of the business, from learning and development to technology investment. However, the current literature falls short of identifying effective and globally applicable strategies to predict the adoption rate of emerging technologies. This paper therefore proposes a novel parametric mathematical model for predicting the adoption rate of emerging technologies through a unique data-driven pipeline, which utilizes global country-level indicators to predict the technology adoption curves for each country and industry. The model is thoroughly validated, and the paper outlines highly promising evaluation results. The practical implications of this approach are significant: it provides policymakers and business leaders with valuable insights for decision-making and scenario planning.
      Citation: Algorithms
      PubDate: 2023-03-10
      DOI: 10.3390/a16030155
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 156: A New Third-Order Family of Multiple
           Root-Findings Based on Exponential Fitted Curve

    • Authors: Vinay Kanwar, Alicia Cordero, Juan R. Torregrosa, Mithil Rajput, Ramandeep Behl
      First page: 156
      Abstract: In this paper, we present a new third-order family of iterative methods for computing the multiple roots of nonlinear equations when the multiplicity (m≥1) is known in advance. There is a plethora of third-order point-to-point methods available in the literature, but our methods are based on a geometric derivation and converge to the required zero even when the derivative becomes zero or close to zero in the vicinity of that zero. We use an exponential fitted curve and tangency conditions to develop our schemes. The well-known Chebyshev, Halley, super-Halley and Chebyshev–Halley methods are special members of our schemes for m=1. Complex dynamics techniques allow us to see the relation between the elements of the family of iterative schemes and the wideness of the basins of attraction of the simple and multiple roots on quadratic polynomials. Several applied problems are considered in order to demonstrate the performance of our methods and to compare them with existing ones. Based on the numerical outcomes, we deduce that our methods perform better than the earlier methods, even in the case of multiple roots of high multiplicity.
      Citation: Algorithms
      PubDate: 2023-03-12
      DOI: 10.3390/a16030156
      Issue No: Vol. 16, No. 3 (2023)
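The abstract names Halley's method as a special member of the family for m=1; a minimal sketch of that classical third-order iteration (not the new family itself) looks as follows.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's third-order iteration:
    x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = 2 * dfx * dfx - fx * d2fx
        if denom == 0:          # iteration undefined; stop at current iterate
            break
        step = 2 * fx * dfx / denom
        x -= step
        if abs(step) < tol:     # converged
            break
    return x
```

For f(x) = x^2 - 2 with x0 = 1, the iterates converge cubically to sqrt(2).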
       
  • Algorithms, Vol. 16, Pages 157: Algorithm for Generating S-Boxes with
           Prescribed Differential Properties

    • Authors: Stanislav Marochok, Pavol Zajac
      First page: 157
      Abstract: Cryptographic S-boxes are vectorial Boolean functions that must fulfill strict criteria to provide security for cryptographic algorithms. There are several existing methods for generating strong cryptographic S-boxes, including stochastic search algorithms. These search algorithms typically generate random candidate Boolean functions (or permutations) that are improved during the search by examining the search space in a specific way. Here, we introduce a new type of stochastic algorithm for generating cryptographic S-boxes. We do not generate and then improve the Boolean function; instead, we build the vector of values incrementally. New values are obtained by randomized search driven by restrictions on the differential spectrum of the generated S-box. In this article, we formulate two new algorithms based on this new approach and study the better one in greater detail. We prove the correctness of the proposed algorithm and evaluate its complexity. The final part contains an experimental evaluation of the method. We show that the algorithm generates S-boxes with better properties than a random search. We believe that our approach can be extended in the future by adopting more advanced stochastic search methods.
      Citation: Algorithms
      PubDate: 2023-03-13
      DOI: 10.3390/a16030157
      Issue No: Vol. 16, No. 3 (2023)
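The differential spectrum that drives the randomized search above is the difference distribution table (DDT) of the S-box; a straightforward way to compute it, and the differential uniformity the search tries to keep small, is sketched below.

```python
def differential_spectrum(sbox):
    """Difference distribution table (DDT) of an S-box given as a list of
    length n = 2^k.  ddt[a][b] counts inputs x with S(x ^ a) ^ S(x) == b.
    The maximum entry over a != 0 is the differential uniformity, which
    generation algorithms try to minimize."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            ddt[a][sbox[x ^ a] ^ sbox[x]] += 1
    uniformity = max(ddt[a][b] for a in range(1, n) for b in range(n))
    return ddt, uniformity
```

As a sanity check, the identity S-box is affine, so every nonzero input difference a propagates to output difference a with certainty, giving the worst possible uniformity n; each DDT row always sums to n.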
       
  • Algorithms, Vol. 16, Pages 158: Learning Distributed Representations and
           Deep Embedded Clustering of Texts

    • Authors: Shuang Wang, Amin Beheshti, Yufei Wang, Jianchao Lu, Quan Z. Sheng, Stephen Elbourn, Hamid Alinejad-Rokny
      First page: 158
      Abstract: Instructors face significant time and effort constraints when grading students’ assessments on a large scale. Clustering similar assessments is a unique and effective technique that has the potential to significantly reduce the workload of instructors in online and large-scale learning environments. By grouping together similar assessments, the mark for one assessment in a cluster can be propagated to the other similar assessments, allowing for a more efficient and streamlined grading process. To address this issue, this paper focuses on text assessments and proposes a method for reducing the workload of instructors by clustering similar assessments. The proposed method uses distributed representations to transform texts into vectors, and contrastive learning to improve representations so that they distinguish the differences among similar texts. The paper presents a general framework for clustering similar texts that includes label representation, K-means, and self-organizing map algorithms, with the objective of improving clustering performance as measured by the Accuracy (ACC) and Normalized Mutual Information (NMI) metrics. The proposed framework is evaluated experimentally on two real datasets. The results show that the self-organizing map and K-means algorithms with pre-trained language models outperform the label representation algorithms on the different datasets.
      Citation: Algorithms
      PubDate: 2023-03-13
      DOI: 10.3390/a16030158
      Issue No: Vol. 16, No. 3 (2023)
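The embed-then-cluster pipeline in the abstract can be sketched with a plain K-means over vectorized texts; the toy bag-of-words embedding below stands in for the distributed (pre-trained language model) representations the paper actually uses.

```python
import random


def kmeans(vectors, k, iters=20, seed=0):
    """Plain K-means on dense vectors (lists of floats)."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign


def embed(texts):
    """Toy bag-of-words embedding standing in for a learned representation."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    return [[t.lower().split().count(w) for w in vocab] for t in texts]
```

Assessments whose vectors land in the same cluster would then share a mark; with well-separated vectors the clustering recovers the intended groups regardless of initialization.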
       
  • Algorithms, Vol. 16, Pages 159: ASP-Based Declarative Reasoning in
           Data-Intensive Enterprise and IoT Applications

    • Authors: Francesco Calimeri, Nicola Leone, Giovanni Melissari, Francesco Pacenza, Simona Perri, Kristian Reale, Francesco Ricca, Jessica Zangari
      First page: 159
      Abstract: In the last few years, we have witnessed computing devices becoming smaller and smaller (e.g., smartphones, smart devices, Raspberry Pi boards, etc.) while the production and availability of data grow bigger and bigger. This work presents DLV-EE, a framework based on Answer Set Programming (ASP) for performing declarative reasoning tasks over data-intensive, distributed applications. It relies on the DLV2 system and features interoperability means for dealing with Big Data over modern industry-level databases (relational and NoSQL). Furthermore, the work introduces DLV-IoT, an ASP system compatible with “mobile” technologies that enables advanced reasoning capabilities on smart/IoT devices; finally, DLV-EE and DLV-IoT are illustrated via some real-world applications.
      Citation: Algorithms
      PubDate: 2023-03-14
      DOI: 10.3390/a16030159
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 160: Implementing Deep Convolutional Neural
           Networks for QR Code-Based Printed Source Identification

    • Authors: Min-Jen Tsai, Ya-Chu Lee, Te-Ming Chen
      First page: 160
      Abstract: QR codes (short for Quick Response codes) were originally developed in the automotive industry to track factory inventories and logistics, but their popularity has expanded significantly in the past few years due to the widespread use of smartphones and mobile phone cameras. QR codes can be used for a variety of purposes, including tracking inventory, advertising, electronic ticketing, and mobile payments. Although they are convenient and widely used to store and share information, their accessibility also means they can be forged easily. Digital forensics can be used to trace printed documents, including QR codes, back to their source, which is important for the investigation of forged documents and the prosecution of forgers. The process involves using optical mechanisms to identify the relationship between source printers and their output. Computer vision and machine learning techniques, such as convolutional neural networks (CNNs), can be applied to learn and summarize statistical features in order to improve identification accuracy. This study implemented AlexNet, DenseNet201, GoogleNet, MobileNetv2, ResNet, VGG16, and other pretrained CNN models and evaluated their ability to predict the source printer of QR codes with a high level of accuracy. Among them, a customized CNN model achieved better results in identifying the printed sources of grayscale and color QR codes with less computational power and training time.
      Citation: Algorithms
      PubDate: 2023-03-14
      DOI: 10.3390/a16030160
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 161: Human Body Shapes Anomaly Detection and
           Classification Using Persistent Homology

    • Authors: Steve de Rose, Philippe Meyer, Frédéric Bertrand
      First page: 161
      Abstract: Accurate sizing systems of a population permit the minimization of the production costs of the textile apparel industry and allow firms to satisfy their customers. Hence, information about human body shapes needs to be extracted in order to examine, compare and classify human morphologies. In this paper, we use topological data analysis to study human body shapes. Persistence theory applied to anthropometric point clouds together with clustering algorithms show that relevant information about shapes is extracted by persistent homology. In particular, the homologies of human body points have interesting interpretations in terms of human anatomy. In the first place, anomalies of scans are detected using complete-linkage hierarchical clusterings. Then, a discrimination index shows which type of clustering separates gender accurately and if it is worth restricting to body trunks or not. Finally, Ward-linkage hierarchical clusterings with Davies–Bouldin, Dunn and Silhouette indices are used to define eight male morphotypes and seven female morphotypes, which are different in terms of weight classes and ratios between bust, waist and hip circumferences. The techniques used in this work permit us to classify human bodies and detect scan anomalies directly on the full human body point clouds rather than the usual methods involving the extraction of body measurements from individuals or their scans.
      Citation: Algorithms
      PubDate: 2023-03-15
      DOI: 10.3390/a16030161
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 162: Multi-Objective Decision-Making Meets
           Dynamic Shortest Path: Challenges and Prospects

    • Authors: Juarez Machado da Silva, Gabriel de Oliveira Ramos, Jorge Luis Victória Barbosa
      First page: 162
      Abstract: The Shortest Path (SP) problem resembles a variety of real-world situations where one needs to find paths between origins and destinations. A generalization of the SP is the Dynamic Shortest Path (DSP) problem, which also models changes in the graph at any time. When a graph changes, DSP algorithms partially recompute the paths while taking advantage of the previous computations. Although the DSP problem represents many real situations, it leaves out some fundamental aspects of decision-making. One of these aspects is the existence of multiple, potentially conflicting objectives that must be optimized simultaneously. Recently, we made a first incursion into the so-called Multi-Objective Dynamic Shortest Path (MODSP) problem, presenting the first algorithm able to take the multi-objective decision-making (MODM) perspective into account when solving a DSP problem. In this paper, we go further and formally define the MODSP problem, thus establishing and clarifying it with respect to its simpler counterparts. In particular, we start with a brief overview of the related literature and then present a complete formalization of the MODSP problem class, highlighting its distinguishing features as compared to similar problems and representing their relationship through a novel taxonomy. This work also motivates the relevance of the MODSP problem by enumerating real-world scenarios that involve all its ingredients, such as multiple objectives and dynamically updated graph topologies. Finally, we discuss the challenges and open questions for this new class of shortest path problems, aiming at future work directions. We hope this work sheds light on the theme and contributes to leveraging relevant research on the topic.
      Citation: Algorithms
      PubDate: 2023-03-16
      DOI: 10.3390/a16030162
      Issue No: Vol. 16, No. 3 (2023)
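The multi-objective ingredient of the MODSP problem can be made concrete with a standard Pareto label-setting search on a static graph with two cost components; this is a textbook sketch of the multi-objective part only, not the dynamic recomputation the paper formalizes.

```python
import heapq


def pareto_shortest_paths(graph, source):
    """Multi-objective label-setting search.

    graph[u] is a list of (v, (c1, c2)) edges.  Returns a dict mapping
    each reached node to the set of Pareto-optimal (c1, c2) path costs
    from source: no kept vector is dominated by another one that is <=
    in every component and strictly < in at least one."""
    def dominated(cand, kept):
        return any(all(k <= c for k, c in zip(v, cand)) and v != cand
                   for v in kept)

    labels = {source: {(0, 0)}}
    queue = [((0, 0), source)]
    while queue:
        cost, u = heapq.heappop(queue)
        if cost not in labels.get(u, set()):
            continue                       # label was pruned after insertion
        for v, w in graph.get(u, ()):
            new = (cost[0] + w[0], cost[1] + w[1])
            kept = labels.setdefault(v, set())
            if new in kept or dominated(new, kept):
                continue
            # drop labels at v that the new label dominates
            for old in [x for x in kept
                        if all(n <= o for n, o in zip(new, x)) and new != x]:
                kept.discard(old)
            kept.add(new)
            heapq.heappush(queue, (new, v))
    return labels
```

With conflicting objectives (say, distance and time) several incomparable path costs can survive at the destination, which is exactly why a single "shortest" path no longer exists.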
       
  • Algorithms, Vol. 16, Pages 163: Convergence and Stability of a New
           Parametric Class of Iterative Processes for Nonlinear Systems

    • Authors: Alicia Cordero, Javier G. Maimó, Antmel Rodríguez-Cabral, Juan R. Torregrosa
      First page: 163
      Abstract: In this manuscript, we carry out a study on the generalization of a known family of multipoint scalar iterative processes for approximating the solutions of nonlinear systems. The convergence analysis of the proposed class under various smooth conditions is provided. We also study the stability of this family, analyzing the fixed and critical points of the rational operator resulting from applying the family on low-degree polynomials, as well as the basins of attraction and the orbits (periodic or not) that these points produce. This dynamical study also allows us to observe which members of the family are more stable and which have chaotic behavior. Graphical analyses of dynamical planes, parameter line and bifurcation planes are also studied. Numerical tests are performed on different nonlinear systems for checking the theoretical results and to compare the proposed schemes with other known ones.
      Citation: Algorithms
      PubDate: 2023-03-16
      DOI: 10.3390/a16030163
      Issue No: Vol. 16, No. 3 (2023)
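The baseline that multipoint families of this kind generalize is Newton's method for systems; a minimal 2x2 sketch (solving the linear step by Cramer's rule) is given below, as a reference point rather than the proposed parametric class.

```python
def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for a 2x2 nonlinear system.

    F(p) -> (f1, f2); J(p) -> ((a, b), (c, d)), the Jacobian at p.
    Each step solves J * delta = -F(p) and updates p <- p + delta."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F((x, y))
        (a, b), (c, d) = J((x, y))
        det = a * d - b * c
        if det == 0:                  # singular Jacobian; stop
            break
        dx = (-f1 * d + f2 * b) / det  # Cramer's rule
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y
```

For the system x^2 + y^2 = 4, x = y, the iterates converge quadratically to (sqrt(2), sqrt(2)) from a nearby starting point.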
       
  • Algorithms, Vol. 16, Pages 164: Comparison of Single-Lane Roundabout Entry
           Degree of Saturation Estimations from Analytical and Regression Models

    • Authors: Ana Čudina Ivančev, Maja Ahac, Saša Ahac, Vesna Dragčević
      First page: 164
      Abstract: Roundabout design is an iterative process consisting of a preliminary geometry design, geometry performance checks, and the estimation of intersection functionality (based on the results of analytical or regression models). Since both roundabout geometry design procedures and traffic characteristics vary around the world, the discussion on which functionality estimation model is more appropriate is ongoing. This research aims to reduce the uncertainty in decision-making during this final roundabout design stage. Its two objectives were to analyze and compare the results of roundabout performance estimations derived from one analytical and one regression model, and to quantify the model results’ susceptibility to changes in roundabout geometric parameters. For this, 60 four-legged single-lane roundabout schemes were created, varying in size and leg alignment. Their geometric parameters resulted from the assumption of their location in a suburban environment and chosen design vehicle swept path analysis. To compare the models’ results, the degree of saturation of roundabout entries was calculated based on presumed traffic flows. The results showed that the regression model estimates higher functionality and that this difference (both between the two models and regression models applied on different schemes) is more pronounced as the outer radius and angle between the legs increase.
      Citation: Algorithms
      PubDate: 2023-03-18
      DOI: 10.3390/a16030164
      Issue No: Vol. 16, No. 3 (2023)
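The quantity compared in the study is straightforward to state: the degree of saturation of an entry is its demand flow divided by its capacity. The sketch below shows that relationship together with a generic linear regression capacity model in circulating flow; the coefficients are illustrative placeholders, not the calibrated analytical or regression models the paper evaluates.

```python
def degree_of_saturation(entry_flow, capacity):
    """Degree of saturation x = demand flow / entry capacity."""
    return entry_flow / capacity


def regression_capacity(circulating_flow, a=1250.0, b=0.53):
    """Illustrative linear regression capacity model C = a - b * Qc.

    a and b are placeholder coefficients (hypothetical, in pcu/h and
    dimensionless respectively); real models fit them to observed pairs
    of entry and circulating flows for a given geometry."""
    return max(0.0, a - b * circulating_flow)
```

With these placeholder coefficients, a circulating flow of 500 pcu/h gives a capacity of 985 pcu/h, so an entry demand of 700 pcu/h yields x of about 0.71.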
       
  • Algorithms, Vol. 16, Pages 165: Resilience and Resilient Systems of
           Artificial Intelligence: Taxonomy, Models and Methods

    • Authors: Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko, Borys Kuzikov
      First page: 165
      Abstract: Artificial intelligence systems are increasingly being used in industrial applications, security and military contexts, disaster response complexes, policing and justice practices, finance, and healthcare systems. However, disruptions to these systems can have negative impacts on health, mortality, human rights, and asset values. The protection of such systems from various types of destructive influences is thus a relevant area of research. The vast majority of previously published works are aimed at reducing vulnerability to certain types of disturbances or implementing certain resilience properties. At the same time, the authors either do not consider the concept of resilience as such, or their understanding of it varies greatly. The aim of this study is to present a systematic approach to analyzing the resilience of artificial intelligence systems, along with an analysis of relevant scientific publications. Our methodology involves forming a set of resilience factors, organizing and defining taxonomic and ontological relationships for the resilience factors of artificial intelligence systems, and analyzing relevant resilience solutions and challenges. This study analyzes the sources of threats and the methods used to ensure each resilience property of artificial intelligence systems. As a result, the potential to create a resilient artificial intelligence system by configuring the architecture and learning scenarios is confirmed. The results can serve as a roadmap for establishing technical requirements for forthcoming artificial intelligence systems, as well as a framework for assessing the resilience of already developed artificial intelligence systems.
      Citation: Algorithms
      PubDate: 2023-03-18
      DOI: 10.3390/a16030165
      Issue No: Vol. 16, No. 3 (2023)
       
  • Algorithms, Vol. 16, Pages 166: Framework for Evaluating Potential Causes
           of Health Risk Factors Using Average Treatment Effect and Uplift Modelling
           

    • Authors: Daniela Galatro, Rosario Trigo-Ferre, Allana Nakashook-Zettler, Vincenzo Costanzo-Alvarez, Melanie Jeffrey, Maria Jacome, Jason Bazylak, Cristina H. Amon
      First page: 166
      Abstract: Acute myeloid leukemia (AML) is a type of blood cancer that affects both adults and children. Benzene exposure has been reported to increase the risk of developing AML in children. The assessment of the potential relationship between environmental benzene exposure and childhood AML has been documented in the literature using odds ratios and/or risk ratios, with data fitted to unconditional logistic regression. A common feature of studies involving relationships between environmental risk factors and health outcomes is the lack of proper analysis to evidence causation. Although statistical causal analysis is commonly used to determine causation by evaluating a distribution’s parameters, it is challenging to infer causation in complex systems from single correlation coefficients. Machine learning (ML) approaches, based on causal pattern recognition, can provide an accurate alternative for modeling counterfactual scenarios. In this work, we propose a framework using the average treatment effect (ATE) and Uplift modeling to evidence causation when relating indoor and outdoor benzene exposure to childhood AML, effectively predicting causation for indoor exposure to this contaminant. An analysis of the assumptions, cross-validation, sample size, and interaction between predictors is also provided, guiding future work on generalizing this approach to the prediction of health outcomes.
      Citation: Algorithms
      PubDate: 2023-03-19
      DOI: 10.3390/a16030166
      Issue No: Vol. 16, No. 3 (2023)
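The two estimands named in the abstract can be sketched in a few lines: a naive ATE as a difference of group means, and the two-model form of uplift modeling as a difference of per-individual predicted outcome rates. This is a didactic sketch under an exchangeability assumption, not the paper's fitted pipeline; the rate functions stand in for trained models.

```python
def average_treatment_effect(outcomes_treated, outcomes_control):
    """Naive ATE estimate: difference in mean outcomes between groups.
    Valid as a causal estimate only if the groups are exchangeable
    (e.g., randomized), an assumption made for illustration."""
    return (sum(outcomes_treated) / len(outcomes_treated)
            - sum(outcomes_control) / len(outcomes_control))


def uplift_two_model(rate_if_treated, rate_if_control, x):
    """Two-model uplift: uplift(x) = P(outcome | treated, x)
    - P(outcome | control, x).  The two rate functions stand in for
    models fitted separately on the treated and control samples."""
    return rate_if_treated(x) - rate_if_control(x)
```

The ATE summarizes the average effect over the population, while uplift resolves it per covariate value x, which is what allows the framework to separate indoor from outdoor exposure effects.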
       
  • Algorithms, Vol. 16, Pages 67: Cathode Shape Design for Steady-State
           Electrochemical Machining

    • Authors: Jinming Lu, Ewald A. Werner
      First page: 67
      Abstract: The inverse or cathode shape design problem of electrochemical machining (ECM) deals with the computation of the shape of the tool cathode required for producing a workpiece anode of a desired shape. This work applied the complex variable method and the continuous adjoint-based shape optimization method to solve the steady-state cathode shape design problem with anode shapes of different smoothnesses. An exact solution to the cathode shape design problem is proven to exist only in cases when the function describing the anode shape is analytic. The solution’s physical realizability is shown to depend on the aspect ratio of features on the anode surface and the width of the standard equilibrium front gap. In cases where an exact and physically realizable cathode shape exists, the continuous adjoint-based shape optimization method is shown to produce accurate numerical solutions; otherwise, the method produces cathode shapes with singularities. For the latter cases, the work demonstrates how perimeter regularization can be applied to compute smooth approximate cathode shapes suitable for producing workpieces within the range of manufacturing tolerance.
      Citation: Algorithms
      PubDate: 2023-01-19
      DOI: 10.3390/a16020067
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 68: Inverse Reinforcement Learning as the
           Algorithmic Basis for Theory of Mind: Current Methods and Open Problems

    • Authors: Jaime Ruiz-Serra, Michael S. Harré
      First page: 68
      Abstract: Theory of mind (ToM) is the psychological construct by which we model another’s internal mental states. Through ToM, we adjust our own behaviour to best suit a social context, and therefore it is essential to our everyday interactions with others. In adopting an algorithmic (rather than a psychological or neurological) approach to ToM, we gain insights into cognition that will aid us in building more accurate models for the cognitive and behavioural sciences, as well as enable artificial agents to be more proficient in social interactions as they become more embedded in our everyday lives. Inverse reinforcement learning (IRL) is a class of machine learning methods by which to infer the preferences (rewards as a function of state) of a decision maker from its behaviour (trajectories in a Markov decision process). IRL can provide a computational approach for ToM, as recently outlined by Jara-Ettinger, but this will require a better understanding of the relationship between ToM concepts and existing IRL methods at the algorithmic level. Here, we provide a review of prominent IRL algorithms and their formal descriptions, and discuss the applicability of IRL concepts as the algorithmic basis of a ToM in AI.
      Citation: Algorithms
      PubDate: 2023-01-19
      DOI: 10.3390/a16020068
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 69: RUemo—The Classification Framework
           for Russia-Ukraine War-Related Societal Emotions on Twitter through
           Machine Learning

    • Authors: Piyush Vyas, Gitika Vyas, Gaurav Dhiman
      First page: 69
      Abstract: The beginning of this decade brought utter international chaos with the COVID-19 pandemic and the Russia-Ukraine war (RUW). The ongoing war has been building pressure across the globe. People have been showcasing their opinions through different communication media, of which social media is the prime source. Consequently, it is important to analyze people’s emotions toward the RUW. This paper therefore aims to provide a framework for automatically classifying the distinct societal emotions on Twitter, utilizing the amalgamation of Emotion Robustly Optimized Bidirectional Encoder Representations from the Transformers Pre-training Approach (Emoroberta) and machine-learning (ML) techniques. This combination shows the originality of our proposed framework, i.e., Russia-Ukraine War emotions (RUemo), in the context of the RUW. We have utilized the Twitter dataset related to the RUW available on Kaggle.com. The RUemo framework can extract the 27 distinct emotions of Twitter users, which are further classified by ML techniques. We achieved 95% testing accuracy with the multilayer perceptron and logistic regression ML techniques on the multiclass emotion classification task. Our key findings indicate that: first, 81% of Twitter users in the sample show a neutral position toward the RUW; second, there is evidence of social bots posting RUW-related tweets; third, other than Russia and Ukraine, users mentioned countries such as Slovakia and the USA; and fourth, the Twitter accounts of the Ukrainian President and the US President are also mentioned by Twitter users. Overall, the majority of tweets describe the RUW in key terms related more to Ukraine than to Russia.
      Citation: Algorithms
      PubDate: 2023-01-20
      DOI: 10.3390/a16020069
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 70: On the Moments of the Number of Hires in
           the Assistant Hiring Algorithm

    • Authors: Leeseok Kim, José Luis Palacios
      First page: 70
      Abstract: We find closed-form expressions for the variance and the third moment of the number of hires in the assistant hiring algorithm, as well as asymptotic values for higher moments of this variable.
      Citation: Algorithms
      PubDate: 2023-01-21
      DOI: 10.3390/a16020070
      Issue No: Vol. 16, No. 2 (2023)
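The moments studied in the paper above have a clean combinatorial basis: the number of hires equals the number of left-to-right maxima of a random permutation, candidate i is a new maximum with probability 1/i, and these indicator events are independent. A sketch of the resulting exact mean and variance, with a Monte Carlo cross-check (the paper's closed forms for the third and higher moments are not reproduced here):

```python
from fractions import Fraction
import random


def hires_moments(n):
    """Exact mean and variance of the number of hires for n candidates.

    The mean is the harmonic number H_n = sum 1/i, and since the record
    indicators are independent Bernoulli(1/i), the variance is
    sum (1/i)(1 - 1/i) = H_n - sum 1/i^2."""
    mean = sum(Fraction(1, i) for i in range(1, n + 1))
    var = sum(Fraction(1, i) - Fraction(1, i * i) for i in range(1, n + 1))
    return mean, var


def simulate_hires(n, trials=2000, seed=1):
    """Monte Carlo check: average count of left-to-right maxima."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        best = -1
        for c in perm:
            if c > best:           # candidate beats all predecessors: hired
                best, total = c, total + 1
    return total / trials
```

For n = 3 the exact values are a mean of 11/6 and a variance of 17/36, which the simulation reproduces to sampling accuracy.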
       
  • Algorithms, Vol. 16, Pages 71: Locating the Parameters of RBF Networks
           Using a Hybrid Particle Swarm Optimization Method

    • Authors: Ioannis G. Tsoulos, Vasileios Charilogis
      First page: 71
      Abstract: In the present work, an innovative two-phase method is presented for parameter tuning in radial basis function artificial neural networks. These kinds of machine learning models find application in many scientific fields, in classification problems or in function regression. In the first phase, a technique based on particle swarm optimization is performed to locate a promising interval of values for the network parameters. Particle swarm optimization was used as it is a highly reliable method for global optimization problems, and in addition, it is one of the fastest and most flexible techniques of its class. In the second phase, the network is trained within the optimal interval using a global optimization technique such as a genetic algorithm. Furthermore, in order to speed up the training of the network and due to the use of a two-stage method, parallel programming techniques were utilized. The new method was applied to a number of well-known classification and regression datasets, and the results were more than promising.
      Citation: Algorithms
      PubDate: 2023-01-21
      DOI: 10.3390/a16020071
      Issue No: Vol. 16, No. 2 (2023)
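The first-phase search described above can be illustrated with a minimal global-best particle swarm optimizer; this is a generic PSO sketch on an arbitrary objective, not the paper's RBF-specific two-phase procedure.

```python
import random


def pso(f, dim, bounds, n_particles=20, iters=60, seed=0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimization (minimization).

    w is the inertia weight; c1 and c2 weight the pulls toward each
    particle's personal best and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(pos[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(pos[i]), val
    return gbest, gbest_val
```

In the paper's setting f would score a candidate interval of RBF network parameters; on a simple sphere objective the swarm contracts to a neighbourhood of the minimum, which is what the second phase would then refine.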
       
  • Algorithms, Vol. 16, Pages 72: Modeling Firm Search and Innovation
           Trajectory Using Swarm Intelligence

    • Authors: Ren-Raw Chen, Cameron D. Miller, Puay Khoon Toh
      First page: 72
      Abstract: We developed a swarm intelligence-based model to study firm search across innovation topics. Firm search modeling has primarily been “firm-centric,” emphasizing the firm’s own prior performance. Fields interested in firm search behavior—strategic management, organization science, and economics—lack a suitable simulation model to incorporate a more robust set of influences, such as the influence of competitors. We developed a swarm intelligence-based simulation model to fill this gap. To demonstrate how to fit the model to real world data, we applied latent Dirichlet allocation to patent abstracts to derive a topic search space and then provide equations to calibrate the model’s parameters. We are the first to develop a swarm intelligence-based application to study firm search and innovation. The model and data methodology can be extended to address a number of questions related to firm search and competitive dynamics.
      Citation: Algorithms
      PubDate: 2023-01-22
      DOI: 10.3390/a16020072
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 73: Low-Order Electrochemical State Estimation
           for Li-Ion Batteries

    • Authors: Higuatzi Moreno, Alexander Schaum
      First page: 73
      Abstract: Batteries are complex systems involving spatially distributed microscopic mechanisms on different time scales whose adequate interplay is essential to ensure the desired functioning. Describing these phenomena yields nonlinearly coupled partial differential equations whose numerical solution requires considerable effort and computation time, making them an infeasible solution for real-time applications. Nonetheless, information about the internal electrochemical states of the battery can pave the way for many advanced monitoring and control strategies with great potential for improving efficiency and longevity. For such purposes, the present paper proposes a low-order representation of the essential dynamics of the internal electrochemical mechanisms, based on Dynamic Mode Decomposition with control (DMDc), to obtain an improved equivalent circuit model (ECM) representation with continuously updated parameters, combined with an extended Kalman filter (EKF). The model-order reduction step extensively exploits the model structure, yielding a well-structured low-order representation without artificial numerical correlations. The performance of the proposed method is illustrated with numerical simulations based on a well-established reference model, showing its potential usefulness in real-time applications requiring knowledge of the internal electrochemical states besides the state of charge.
      Citation: Algorithms
      PubDate: 2023-01-28
      DOI: 10.3390/a16020073
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 74: CUDA and OpenMp Implementation of Boolean
           Matrix Product with Applications in Visual SLAM

    • Authors: Amir Zarringhalam, Saeed Shiry Ghidary, Ali Mohades, Seyed-Ali Sadegh-Zadeh
      First page: 74
      Abstract: In this paper, the concept of ultrametric structure is intertwined with the SLAM procedure. A set of pre-existing transformations has been used to create a new simultaneous localization and mapping (SLAM) algorithm. We have developed two new parallel algorithms that implement the time-consuming Boolean transformations of the space dissimilarity matrix. The resulting matrix is an important input to the vector quantization (VQ) step in SLAM processes. These algorithms, written in Compute Unified Device Architecture (CUDA) and Open Multi-Processing (OpenMP) pseudo-codes, make the Boolean transformation computationally feasible on a real-world-size dataset. We expect our newly introduced SLAM algorithm, ultrametric Fast Appearance Based Mapping (FABMAP), to outperform regular FABMAP2, since ultrametric spaces are more clusterable than regular Euclidean spaces. Another scope of the presented research is the development of a novel measure of ultrametricity, along with the creation of the Ultrametric-PAM clustering algorithm. Since current measures have computational time complexity of order O(n^3), a new measure with lower time complexity, O(n^2), is of potential significance.
      Citation: Algorithms
      PubDate: 2023-01-29
      DOI: 10.3390/a16020074
      Issue No: Vol. 16, No. 2 (2023)
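A common matrix transformation in the ultrametric setting, assumed here for illustration and not necessarily the paper's exact operator, is the min-max closure that turns a dissimilarity matrix into its sub-dominant ultrametric. The serial sketch below shows the O(n^3) step that CUDA/OpenMP kernels of this kind parallelize, on a made-up 3-point matrix:

```python
# Min-max transitive closure producing the sub-dominant ultrametric:
# iterate d(i, j) <- min over k of max(d(i, k), d(k, j)) to a fixed point.
# Serial toy version of the O(n^3) step a GPU kernel would parallelize.

def subdominant_ultrametric(d):
    n = len(d)
    d = [row[:] for row in d]          # work on a copy
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    cand = max(d[i][k], d[k][j])
                    if cand < d[i][j]:
                        d[i][j] = cand
                        changed = True
    return d

D = [[0, 2, 9],
     [2, 0, 4],
     [9, 4, 0]]
U = subdominant_ultrametric(D)         # path 0-1-2 shortens d(0, 2) to 4
```

The result satisfies the strong (ultrametric) triangle inequality d(i, j) <= max(d(i, k), d(k, j)) for all triples, which is the clusterability property the abstract appeals to.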
       
  • Algorithms, Vol. 16, Pages 75: Intrusion Detection for Electric Vehicle
           Charging Systems (EVCS)

    • Authors: Mohamed ElKashlan, Heba Aslan, Mahmoud Said Elsayed, Anca D. Jurcut, Marianne A. Azer
      First page: 75
      Abstract: The market for Electric Vehicles (EVs) has expanded tremendously, as seen at the recent Conference of the Parties 27 (COP27) held at Sharm El Sheikh, Egypt, in November 2022. This calls for the creation of an ecosystem that is user-friendly and secure. Internet-connected Electric Vehicle Charging Stations (EVCSs) provide a rich user experience and add-on services. Ultimately, the EVCSs are connected to a management system, the Electric Vehicle Charging Station Management System (EVCSMS). Remote cyberattacks on the EVCS ecosystem are rising at the same rate as physical attacks and vandalism on the physical EVCSs. A cyberattack can be more severe than a physical attack, as it may affect thousands of EVCSs at the same time. Intrusion detection is vital in defending against diverse types of attacks and unauthorized activities. Fundamentally, the Intrusion Detection System (IDS) problem is a classification problem: the IDS tries to determine whether each traffic stream is legitimate or malicious, that is, binary classification. Furthermore, the IDS can identify the type of malicious traffic, which is called multiclass classification. In this paper, we address IoT security issues in EVCSs by using different machine learning techniques on a native IoT dataset to discover fraudulent traffic in EVCSs, which has not been performed in any previous research. We also compare different machine learning classifier algorithms for detecting Distributed Denial of Service (DDoS) attacks in the EVCS network environment. A typical Internet of Things (IoT) dataset obtained from actual IoT traffic is used in the paper. We compare classification algorithms that are placed in line with the traffic and contain DDoS attacks targeting the EVCS network. The results obtained from this research improve the stability of the EVCS system and significantly reduce the number of cyberattacks that could disrupt the daily life activities associated with the EVCS ecosystem.
      Citation: Algorithms
      PubDate: 2023-01-31
      DOI: 10.3390/a16020075
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 76: Machine Learning for Early Outcome
           Prediction in Septic Patients in the Emergency Department

    • Authors: Massimiliano Greco, Pier Francesco Caruso, Sofia Spano, Gianluigi Citterio, Antonio Desai, Alberto Molteni, Romina Aceto, Elena Costantini, Antonio Voza, Maurizio Cecconi
      First page: 76
      Abstract: Background: Sepsis is one of the major causes of in-hospital death and is frequent in patients presenting to the emergency department (ED). Early identification of high-risk septic patients is critical. Machine learning (ML) techniques have been proposed for the identification and prognostication of ED septic patients, but these models often lack pre-hospital data and validation against early sepsis identification scores (such as qSOFA) and scores for critically ill patients (SOFA, APACHE II). Methods: We conducted an electronic health record (EHR) study to test whether interpretable and scalable ML models predict mortality in septic ED patients and compared their performance with clinical scores. Consecutive adult septic patients admitted to the ED over 18 months were included. We built ML models ranging from a simple classifier to unbalanced and balanced logistic regression and random forest, and compared their performance to the qSOFA, SOFA, and APACHE II scores. Results: We included 425 sepsis patients after screening 38,500 EHRs for sepsis criteria. Overall mortality was 15.2% and peaked in patients coming from retirement homes (38%). Random forest (0.813), like balanced (0.811) and unbalanced (0.863) logistic regression, identified patients at risk of mortality. All ML models outperformed the qSOFA, APACHE II, and SOFA scores. Age, mean arterial pressure, and serum sodium were major mortality predictors. Conclusions: We confirmed that random forest models outperform previous models, including qSOFA, SOFA, and APACHE II, in identifying septic patients at higher mortality risk, while maintaining good interpretability. Machine learning models may gain further adoption in the future with the increasing diffusion and granularity of EHR data, yielding the advantage of increased scalability compared to standard statistical techniques.
      Citation: Algorithms
      PubDate: 2023-02-01
      DOI: 10.3390/a16020076
      Issue No: Vol. 16, No. 2 (2023)
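As background for the score comparison, qSOFA itself is simple to compute. The sketch below implements the standard qSOFA criteria (respiratory rate >= 22/min, systolic blood pressure <= 100 mmHg, altered mentation, i.e., GCS < 15); these criteria are well established in the literature and are not a detail taken from this paper:

```python
def qsofa(respiratory_rate, systolic_bp, gcs):
    """Standard quick-SOFA score: one point per criterion, range 0-3."""
    score = 0
    if respiratory_rate >= 22:   # breaths per minute
        score += 1
    if systolic_bp <= 100:       # mmHg
        score += 1
    if gcs < 15:                 # altered mentation (Glasgow Coma Scale)
        score += 1
    return score

# A score of 2 or more conventionally flags high risk of poor outcome.
high_risk = qsofa(respiratory_rate=24, systolic_bp=95, gcs=14) >= 2
```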
       
  • Algorithms, Vol. 16, Pages 77: Transfer Learning Approach for Human
           Activity Recognition Based on Continuous Wavelet Transform

    • Authors: Olena Pavliuk, Myroslav Mishchuk, Christine Strauss
      First page: 77
      Abstract: Over the last few years, human activity recognition (HAR) has drawn increasing interest from the scientific community. This attention is mainly attributable to the proliferation of wearable sensors and the expanding role of HAR in such fields as healthcare, sports, and human activity monitoring. Convolutional neural networks (CNNs) are becoming a popular approach for addressing HAR problems. However, this method requires extensive training datasets to perform adequately on new data. This paper proposes a novel deep learning model pre-trained on scalograms generated using the continuous wavelet transform (CWT). Nine popular CNN architectures and different CWT configurations were considered to select the best-performing combination, resulting in the training and evaluation of more than 300 deep learning models. On the source KU-HAR dataset, the selected model achieved a classification accuracy and F1 score of 97.48% and 97.52%, respectively, outperforming contemporary state-of-the-art works employing this dataset. On the target UCI-HAPT dataset, the proposed model yielded a maximum accuracy and F1 score increase of 0.21% and 0.33%, respectively, on the whole UCI-HAPT dataset and of 2.82% and 2.89%, respectively, on the UCI-HAPT subset. It was concluded that the use of the proposed model, particularly with frozen layers, results in improved performance, faster training, and smoother gradient descent on small HAR datasets. However, the use of the pre-trained model on sufficiently large datasets may lead to negative transfer and accuracy degradation.
      Citation: Algorithms
      PubDate: 2023-02-01
      DOI: 10.3390/a16020077
      Issue No: Vol. 16, No. 2 (2023)
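The scalogram-generation step can be sketched directly. The minimal example below uses a real-valued Morlet-like wavelet and an arbitrary scale grid, both assumptions for illustration (the paper's exact CWT configuration is among the hyperparameters it searches over), to turn a toy 1-D sensor signal into the kind of 2-D scalogram fed to a CNN:

```python
import numpy as np

# Hand-rolled CWT scalogram: convolve the signal with scaled copies of a
# real-valued Morlet-like wavelet and keep the magnitude per (scale, time).

def morlet(t, w0=5.0):
    return np.exp(-0.5 * t**2) * np.cos(w0 * t)

def cwt_scalogram(x, scales):
    n = len(x)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1) / s     # wavelet support, 4 sigma
        psi = morlet(t) / np.sqrt(s)             # scale normalization
        out[i] = np.convolve(x, psi, mode="same")
    return np.abs(out)                           # magnitude scalogram

fs = 50.0                                  # Hz, a typical wearable-IMU rate
t = np.arange(0, 2, 1 / fs)                # 100 samples
x = np.sin(2 * np.pi * 3.0 * t)            # toy 3 Hz "activity" component
scales = np.arange(1, 13)
S = cwt_scalogram(x, scales)               # shape (scales, time)
```

The resulting (scales x time) array can be saved as an image, which is how scalogram-based pre-training pipelines usually hand it to an image-classification CNN.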
       
  • Algorithms, Vol. 16, Pages 78: Performance Evaluation of NoSQL Document
           Databases: Couchbase, CouchDB, and MongoDB

    • Authors: Inês Carvalho, Filipe Sá, Jorge Bernardino
      First page: 78
      Abstract: NoSQL document databases emerged as an alternative to relational databases for managing large volumes of data. NoSQL document databases ensure big data storage and good query performance and are essential when the data scheme does not fit the scheme of relational databases. They store their data in the form of documents and can handle unstructured, semi-structured, and structured data. This work evaluates the top three open-source NoSQL document databases, Couchbase, CouchDB, and MongoDB, with the Yahoo! Cloud Serving Benchmark (YCSB), which has become a standard for NoSQL database evaluation. The performance and scale-up of the document databases are assessed using YCSB workloads with different numbers of records and threads, and the runtime is measured for each database. In the experimental evaluation, we concluded that MongoDB is the database with the best runtime, except for the workload composed of scan operations. In addition, we identified CouchDB as the database with the best scale-up when varying the number of threads.
      Citation: Algorithms
      PubDate: 2023-02-01
      DOI: 10.3390/a16020078
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 79: Egyptian Hieroglyphs Segmentation with
           Convolutional Neural Networks

    • Authors: Tommaso Guidi, Lorenzo Python, Matteo Forasassi, Costanza Cucci, Massimiliano Franci, Fabrizio Argenti, Andrea Barucci
      First page: 79
      Abstract: The objective of this work is to show the application of a Deep Learning algorithm able to perform the segmentation of ancient Egyptian hieroglyphs present in an image, with the ambition of being as versatile as possible despite the variability of the image source. The problem is quite complex, the main obstacles being the considerable number of different classes of existing hieroglyphs, the differences related to the hand of the scribe, and the great differences among the various supports, such as papyri, stone, or wood, on which they are written. Furthermore, as in all archaeological finds, damage to the supports is frequent, with the consequence that hieroglyphs can be partially corrupted. In order to face this challenging problem, we leverage the well-known Detectron2 platform, developed by the Facebook AI Research Group, focusing on the Mask R-CNN architecture to perform segmentation of image instances. As in many machine learning studies, one of the hardest challenges is the creation of a suitable dataset. In this paper, we describe a hieroglyph dataset that has been created for the purpose of segmentation, highlighting its pros and cons, and the impact of different hyperparameters on the final results. Tests on the segmentation of images taken from public databases are also presented and discussed, along with the limitations of our study.
      Citation: Algorithms
      PubDate: 2023-02-01
      DOI: 10.3390/a16020079
      Issue No: Vol. 16, No. 2 (2023)
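Instance-segmentation quality in such studies is typically scored with intersection-over-union (IoU) between predicted and ground-truth masks; this is standard evaluation practice for Mask R-CNN-style outputs, not a detail confirmed by the abstract. A minimal sketch on toy binary masks:

```python
def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks (lists of 0/1 rows)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a & b
            union += a | b
    return inter / union if union else 0.0

# Two overlapping toy "glyph" masks on a 4x4 grid.
pred = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
truth = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
iou = mask_iou(pred, truth)    # 2 shared pixels out of 6 in the union
```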
       
  • Algorithms, Vol. 16, Pages 80: Parallel Algorithm for Connected-Component
           Analysis Using CUDA

    • Authors: Dominic Windisch, Christian Kaever, Guido Juckeland, André Bieberle
      First page: 80
      Abstract: In this article, we introduce a parallel algorithm for connected-component analysis (CCA) on GPUs which drastically reduces the volume of data to transfer from GPU to the host. CCA algorithms targeting GPUs typically store the extracted features in arrays large enough to potentially hold the maximum possible number of objects for the given image size. Transferring these large arrays to the host requires large portions of the overall execution time. Therefore, we propose an algorithm which uses a CUDA kernel to merge trees of connected component feature structs. During the tree merging, various connected-component properties, such as total area, centroid and bounding box, are extracted and accumulated. The tree structure then enables us to only transfer features of valid objects to the host for further processing or storing. Our benchmarks show that this implementation significantly reduces memory transfer volume for processing results on the host whilst maintaining similar performance to state-of-the-art CCA algorithms.
      Citation: Algorithms
      PubDate: 2023-02-01
      DOI: 10.3390/a16020080
      Issue No: Vol. 16, No. 2 (2023)
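The feature-merging idea can be illustrated serially. The sketch below is a plain-Python union-find connected-component analysis that accumulates per-component features (area, bounding box) while merging, mirroring in spirit, though not in implementation, the GPU tree merging the paper describes (4-connectivity assumed):

```python
# Serial CCA with feature accumulation: each component root carries
# [area, min_row, min_col, max_row, max_col], merged on every union.

def cca_features(img):
    parent = {}
    feat = {}                       # root pixel -> feature struct

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        parent[rb] = ra
        fa, fb = feat[ra], feat.pop(rb)     # merge features into ra
        fa[0] += fb[0]
        fa[1], fa[2] = min(fa[1], fb[1]), min(fa[2], fb[2])
        fa[3], fa[4] = max(fa[3], fb[3]), max(fa[4], fb[4])

    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v:
                p = (r, c)
                parent[p] = p
                feat[p] = [1, r, c, r, c]
                if r > 0 and img[r - 1][c]:
                    union((r - 1, c), p)
                if c > 0 and img[r][c - 1]:
                    union((r, c - 1), p)
    return feat                     # only surviving roots remain

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
components = cca_features(img)      # two components: areas 3 and 2
```

Because only valid roots keep a feature struct, only those structs would need transferring off the device, which is the memory-traffic reduction the abstract is about.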
       
  • Algorithms, Vol. 16, Pages 81: A Data Analysis Smart System for the
           Optimal Deployment of Nanosensors in the Context of an eHealth Application
           

    • Authors: Alberto Lopez, Jose Aguilar
      First page: 81
      Abstract: This paper presents the utilization of the Data Analysis Smart System (DASS) of ARMNANO in a nanotechnology application in electronic health. We focus on the liver condition of patients that have been monitored with respect to two variables concerning their liver status: the Mean Corpuscular Volume (MCV) and the Alkaline Phosphatase (ALKPHOS). These variables are analyzed using the autonomous cycle “Conditioning Thinking Mode” (CTM), one of the two autonomic cycles of data analysis tasks that make up the DASS. In this sense, an optimization problem is defined to determine the optimal deployment of nanosensors (NSs) for the proper determination of liver status. The application of genetic algorithms (GAs) allows us to find the optimal number of NSs in the system to precisely determine the liver status while avoiding a large data volume. We evaluated the implementation in two case studies and carried out a hyperparameterization process to ensure the proper definition of the key parameters. The greatest propensity is to place NSs in the regions close to the liver, with the deployment becoming saturated as the number of NSs increases (additional NSs do not improve the quality of the liver status value).
      Citation: Algorithms
      PubDate: 2023-02-02
      DOI: 10.3390/a16020081
      Issue No: Vol. 16, No. 2 (2023)
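A minimal genetic-algorithm sketch of such a deployment search: the genome is a bit vector marking candidate nanosensor sites as used, and the toy fitness rewards covering hypothetical target sites while penalizing sensor count. Both the encoding and the objective are assumptions for illustration, not the paper's actual formulation:

```python
import random

# Toy GA: elitist selection, one-point crossover, bit-flip mutation.
random.seed(42)
N_SITES = 12
TARGETS = {1, 4, 7, 10}            # hypothetical sites near the organ

def fitness(genome):
    covered = sum(genome[i] for i in TARGETS)
    return covered - 0.1 * sum(genome)     # coverage minus sensor cost

def evolve(pop_size=20, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SITES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SITES)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With this penalized objective the optimum places sensors exactly on the target sites, reflecting the saturation effect the abstract reports: extra sensors only add cost.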
       
  • Algorithms, Vol. 16, Pages 82: Process Mining IPTV Customer Eye Gaze
           Movement Using Discrete-Time Markov Chains

    • Authors: Zhi Chen, Shuai Zhang, Sally McClean, Fionnuala Hart, Michael Milliken, Brahim Allan, Ian Kegel
      First page: 82
      Abstract: Human-Computer Interaction (HCI) research has extensively employed eye-tracking technologies in a variety of fields. Meanwhile, the ongoing development of Internet Protocol TV (IPTV) has significantly enriched the TV customer experience, which is of great interest to researchers across academia and industry. A previous study was carried out at the BT Ireland Innovation Centre (BTIIC), where an eye tracker was employed to record user interactions with a Video-on-Demand (VoD) application, the BT Player. This paper is a complementary and subsequent study of the analysis of eye-tracking data in our previously published introductory paper. Here, we propose a method for integrating layout information from the BT Player with process mining of customer eye movement on the screen, thereby generating HCI- and industry-relevant insights regarding user experience. We incorporate a popular machine learning model, the discrete-time Markov Chain (DTMC), into our methodology, as the eye tracker records each gaze movement at a particular frequency, a good example of a discrete-time sequence. The Markov model is found suitable for our study, and it helps to reveal characteristics of the gaze movement as well as the user interface (UI) design of the VoD application by interpreting transition matrices, first passage times, the proposed ‘most likely trajectory’, and other Markov properties of the model. Additionally, the study has revealed numerous promising areas for future research, and the code involved in this study is openly available on GitHub.
      Citation: Algorithms
      PubDate: 2023-02-02
      DOI: 10.3390/a16020082
      Issue No: Vol. 16, No. 2 (2023)
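Fitting a DTMC to a gaze trace reduces to counting transitions between areas of interest (AOIs) and normalizing each row into probabilities. A minimal sketch with made-up AOI names (not the BT Player's actual screen layout):

```python
from collections import defaultdict

# Estimate a discrete-time Markov chain transition matrix from an AOI
# sequence: count pairwise transitions, then normalize each row.

def fit_dtmc(sequence):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

gaze = ["menu", "thumbnail", "thumbnail", "synopsis", "menu",
        "thumbnail", "synopsis", "synopsis", "menu"]
P = fit_dtmc(gaze)   # e.g. P["thumbnail"]["synopsis"] = 2/3
```

From such a matrix, quantities like first passage times or a most-probable gaze trajectory can be derived with standard Markov-chain machinery.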
       
  • Algorithms, Vol. 16, Pages 83: Mathematical Modeling of Capillary Drawing
           Stability for Hollow Optical Fibers

    • Authors: Vladimir Pervadchuk, Daria Vladimirova, Anna Derevyankina
      First page: 83
      Abstract: Solving the stability problem of the manufacturing (drawing) of quartz capillaries (pipes) for microstructured optical fibers (hole-assisted fibers) is important for determining effective technological production modes. This importance is also driven by the high cost of fiber production and strict requirements for the accuracy of the fiber’s geometric characteristics. Therefore, a theoretical approach to this problem is relevant and necessary. A modified capillary drawing model that takes into account inertial, viscous, and surface tension forces, as well as all types of heat transfer, is proposed in this research. Within the framework of the linear theory of stability, a mathematical model of isothermal and nonisothermal capillary drawing has been developed. The stability of the process is studied depending on the draw ratio and the Reynolds number. An analysis of the sensitivity of the process to perturbations in the boundary conditions is carried out. The secondary flow that occurs upon transition to the region of instability is also studied. It has been found that at draw ratios above critical values (the instability region), undamped oscillations arise. The existence of optimal parameters of the heating element is shown: a temperature distribution over the furnace surface and a furnace radius at which the stability of the process of drawing quartz tubes increases significantly (several times).
      Citation: Algorithms
      PubDate: 2023-02-02
      DOI: 10.3390/a16020083
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 84: About the Performance of a Calculus-Based
           Approach to Building Model Functions in a Derivative-Free Trust-Region
           Algorithm

    • Authors: Warren Hare, Gabriel Jarry-Bolduc
      First page: 84
      Abstract: This paper examines a calculus-based approach to building model functions in a derivative-free algorithm. This calculus-based approach can be used when the objective function considered is defined via more than one blackbox. Two versions of a derivative-free trust-region method are implemented. The first version builds model functions by using a calculus-based approach, and the second version builds model functions by directly considering the objective function. The numerical experiments demonstrate that the calculus-based approach provides better results in most situations and significantly better results in specific situations.
      Citation: Algorithms
      PubDate: 2023-02-03
      DOI: 10.3390/a16020084
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 85: Detection of Cyberattacks and Anomalies in
           Cyber-Physical Systems: Approaches, Data Sources, Evaluation

    • Authors: Olga Tushkanova, Diana Levshun, Alexander Branitskiy, Elena Fedorchenko, Evgenia Novikova, Igor Kotenko
      First page: 85
      Abstract: Cyberattacks on cyber-physical systems (CPS) can lead to severe consequences, and it is therefore extremely important to detect them at early stages. However, several challenges remain to be solved in this area, including the ability of a security system to detect previously unknown attacks. This problem can be addressed with system behaviour analysis methods and unsupervised or semi-supervised machine learning techniques. The efficiency of an attack detection system strongly depends on the datasets used to train the machine learning models. As real-world data from CPS are mostly unavailable due to the security requirements of cyber-physical objects, there have been several attempts to create such datasets; however, their completeness and validity are questionable. This paper reviews existing approaches to attack and anomaly detection in CPS, with a particular focus on the datasets and evaluation metrics used to assess the efficiency of the proposed solutions. The analysis revealed that only two of the three selected datasets are suitable for solving intrusion detection tasks, since they are generated using real test beds; in addition, only one of the selected datasets contains both network and sensor data, making it preferable for intrusion detection. Moreover, there are different approaches for evaluating the efficiency of the machine learning techniques, which require further analysis and research. Thus, in future research, the authors aim to develop an approach to anomaly detection for CPS using the selected datasets and to conduct experiments to select the performance metrics.
      Citation: Algorithms
      PubDate: 2023-02-03
      DOI: 10.3390/a16020085
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 86: Efficient Online Engagement Analytics
           Algorithm Toolkit That Can Run on Edge

    • Authors: Saw Thiha, Jay Rajasekera
      First page: 86
      Abstract: The rapid expansion of video conferencing and remote work due to the COVID-19 pandemic has resulted in a massive volume of video data to be analyzed in order to understand audience engagement. However, analyzing these data efficiently, particularly in real time, poses a scalability challenge, as online events can involve hundreds of people and last for hours. Existing solutions, especially open-source contributions, usually require dedicated and expensive hardware and are designed as centralized cloud systems. Additionally, they may also require users to stream their video to remote servers, which raises privacy concerns. This paper introduces scalable and efficient computer vision algorithms for analyzing face orientation and eye blinks in real time on edge devices, including Android, iOS, and Raspberry Pi. An example solution is presented for proctoring online meetings, workplaces, and exams. It analyzes audiences on their own devices, thus addressing the scalability and privacy issues, and runs at up to 30 fps on a Raspberry Pi. The proposed face orientation detection algorithm is extremely simple, efficient, and able to estimate the head pose in two degrees of freedom, horizontal and vertical. The proposed Eye Aspect Ratio (EAR) with a simple adaptive threshold demonstrated a significant improvement in terms of false positives and overall accuracy compared to the existing constant-threshold method. Additionally, the algorithms are implemented and open-sourced as a toolkit with modular, cross-platform MediaPipe Calculators and Graphs so that users can easily create custom solutions for a variety of purposes and devices.
      Citation: Algorithms
      PubDate: 2023-02-06
      DOI: 10.3390/a16020086
      Issue No: Vol. 16, No. 2 (2023)
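The EAR itself follows the standard six-landmark formula EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|), which drops toward zero as the eye closes; the paper's contribution is the adaptive threshold on top of it, which is not reproduced here. A minimal sketch with made-up landmark coordinates:

```python
import math

# Standard Eye Aspect Ratio from six eye landmarks: p1/p4 are the eye
# corners, p2/p3 the upper lid, p6/p5 the lower lid.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = eye_aspect_ratio((0, 0), (1, 1), (2, 1), (3, 0),
                            (2, -1), (1, -1))
closed_eye = eye_aspect_ratio((0, 0), (1, 0.1), (2, 0.1), (3, 0),
                              (2, -0.1), (1, -0.1))
```

A blink detector then flags frames where the EAR falls below a threshold; making that threshold adaptive per user and lighting condition is what the abstract reports as the improvement over a constant cutoff.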
       
  • Algorithms, Vol. 16, Pages 87: From Iris Image to Embedded Code: System of
           Methods

    • Authors: Ivan Matveev, Ilia Safonov
      First page: 87
      Abstract: Passwords are ubiquitous in today’s world, as are forgetting and stealing them. Biometric signs are harder to steal and impossible to forget. This paper presents a complete system of methods that takes a secret key and the iris image of the owner as input and generates a public key suitable for insecure storage. It is impossible to obtain the source data (i.e., the secret key or biometric traits) from the public key without the iris image of the owner; the irises of other persons will not help. At the same time, when an iris image of the same person is presented, the secret key is restored. The system has been tested on several iris image databases from public sources. It allows storing 65 bits of secret key, with zero probability of unlocking it with an impostor’s iris and a 10.4% probability of rejecting the owner in one attempt.
      Citation: Algorithms
      PubDate: 2023-02-06
      DOI: 10.3390/a16020087
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 88: Effective Heart Disease Prediction Using
           Machine Learning Techniques

    • Authors: Chintan M. Bhatt, Parth Patel, Tarang Ghetia, Pier Luigi Mazzeo
      First page: 88
      Abstract: The diagnosis and prognosis of cardiovascular disease are crucial medical tasks to ensure correct classification, which helps cardiologists provide proper treatment to the patient. Machine learning applications in the medical niche have increased as they can recognize patterns from data. Using machine learning to classify cardiovascular disease occurrence can help diagnosticians reduce misdiagnosis. This research develops a model that can correctly predict cardiovascular diseases to reduce the fatalities they cause. This paper proposes a method of k-modes clustering with Huang initialization that can improve classification accuracy. The underlying model applies DBSCAN to remove outliers. Models such as random forest (RF), decision tree classifier (DT), multilayer perceptron (MP), and XGBoost (XGB) are used. GridSearchCV was used to tune the hyperparameters of the applied models to optimize the results. The proposed approach is applied to a real-world dataset of 70,000 instances from Kaggle. Models were trained on data split 80:20 and achieved the following accuracies: decision tree, 86.37% (with cross-validation) and 86.53% (without cross-validation); XGBoost, 86.87% (with cross-validation) and 87.02% (without cross-validation); random forest, 87.05% (with cross-validation) and 86.92% (without cross-validation); multilayer perceptron, 87.28% (with cross-validation) and 86.94% (without cross-validation). The proposed models have the following AUC (area under the curve) values: decision tree, 0.94; XGBoost, 0.95; random forest, 0.95; multilayer perceptron, 0.95. The conclusion drawn from this research is that the multilayer perceptron with cross-validation outperformed all other algorithms in terms of accuracy, achieving the highest accuracy of 87.28%.
      Citation: Algorithms
      PubDate: 2023-02-06
      DOI: 10.3390/a16020088
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 89: Rapid Guessing in Low-Stakes Assessments:
           Finding the Optimal Response Time Threshold with Random Search and Genetic
           Algorithm

    • Authors: Okan Bulut, Guher Gorgun, Tarid Wongvorachan, Bin Tan
      First page: 89
      Abstract: Rapid guessing is an aberrant response behavior that commonly occurs in low-stakes assessments with little to no formal consequences for students. Recently, the availability of response time (RT) information in computer-based assessments has motivated researchers to develop various methods to detect rapidly guessed responses systematically. These methods often require researchers to identify an RT threshold subjectively for each item that could distinguish rapid guessing behavior from solution behavior. In this study, we propose a data-driven approach based on random search and genetic algorithm to search for the optimal RT threshold within a predefined search space. We used response data from a low-stakes math assessment administered to over 5000 students in 658 schools across the United States. As we demonstrated how to use our data-driven approach, we also compared its performance with those of the existing threshold-setting methods. The results show that the proposed method could produce viable RT thresholds for detecting rapid guessing in low-stakes assessments. Moreover, compared with the other threshold-setting methods, the proposed method yielded more liberal RT thresholds, flagging a larger number of responses. Implications for practice and directions for future research were discussed.
      Citation: Algorithms
      PubDate: 2023-02-07
      DOI: 10.3390/a16020089
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 90: The Use of Correlation Features in the
           Problem of Speech Recognition

    • Authors: Andriyanov
      First page: 90
      Abstract: The problem addressed in this article concerns improving the efficiency of phraseological radio exchange message recognition, which sometimes takes place under conditions of increased stress for the pilot. High-quality recognition requires signal preprocessing methods. The article considers new data preprocessing algorithms used to extract features from a speech message. Two approaches are proposed: the first builds autocorrelation functions of messages based on the Fourier transform, while the second uses the idea of building autocorrelation portraits of speech signals. The proposed approaches are quite simple to implement, although they require loop operations, since they work with pairs of samples from the original signal. The developed method was validated on the problem of recognizing phraseological radio exchange messages in Russian. The algorithm with preliminary feature extraction provides a gain of 1.7% in recognition accuracy. The use of convolutional neural networks also increases recognition efficiency; the gain for autocorrelation portrait processing is about 3–4%. Quantization is used to optimize the proposed models, and the algorithm’s performance increased by 2.8 times after quantization. It was also possible to increase the recognition accuracy by 1–2% using digital signal processing algorithms. An important feature of the proposed algorithms is the possibility of generalizing them to arbitrary data with time correlation. The speech message preprocessing algorithms discussed in this article are based on classical digital signal processing algorithms, while the idea of constructing autocorrelation portraits based on the time series of a signal is novel. At the same time, this approach ensures high recognition accuracy. However, the study also showed that all the algorithms under consideration perform quite poorly under the influence of strong noise.
      Citation: Algorithms
      PubDate: 2023-02-07
      DOI: 10.3390/a16020090
      Issue No: Vol. 16, No. 2 (2023)
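The core feature-extraction idea, working with pairs of samples, amounts to computing sample autocorrelation functions. A minimal sketch on a toy periodic signal (the framing of successive windows into per-frame "autocorrelation portraits" is omitted):

```python
import math

# Normalized sample autocorrelation computed directly from pairs of
# samples; peaks at lags that are multiples of the signal's period.

def autocorrelation(x, max_lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    acf = []
    for lag in range(max_lag + 1):
        s = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
        acf.append(s / var)
    return acf

signal = [math.sin(2 * math.pi * t / 20) for t in range(100)]  # period 20
acf = autocorrelation(signal, 40)
```

Stacking such functions for consecutive speech frames yields a 2-D feature map, which is the kind of input a convolutional network can then classify.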
       
  • Algorithms, Vol. 16, Pages 91: Redesigning the Wheel for Systematic
           Travelling Salesmen

    • Authors: Tilo Strutz
      First page: 91
      Abstract: This paper investigates the systematic and complete usage of k-opt permutations with k=2⋯6 applied to the local optimization of symmetric two-dimensional instances with up to 10^7 points. The proposed method utilizes several techniques for accelerating the processing, such that good tours can be achieved in limited time: candidate selection based on Delaunay triangulation, precomputation of a sparse distance matrix, a two-level data structure, and parallel processing based on multithreading. The proposed approach finds good tours (excess of 0.72–8.68% over the best-known tour) in a single run within 30 min for instances with more than 10^5 points, and specifically 3.37% for the largest examined tour containing 10^7 points. The new method proves to be competitive with a state-of-the-art approach based on the Lin–Kernighan–Helsgaun method (LKH) when applied to clustered instances.
      Citation: Algorithms
      PubDate: 2023-02-07
      DOI: 10.3390/a16020091
      Issue No: Vol. 16, No. 2 (2023)
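The simplest member of the k-opt family applied systematically here is 2-opt. The sketch below is a plain 2-opt local search on random points, without the paper's acceleration machinery (Delaunay candidate lists, sparse distance matrix, two-level data structure, multithreading):

```python
import math
import random

# 2-opt: repeatedly replace edge pair (a,b),(c,d) with (a,c),(b,d) whenever
# that shortens the tour, reversing the segment in between.

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):   # skip adjacent edges
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d])):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
tour = list(range(30))
before = tour_length(pts, tour)
tour = two_opt(pts, tour)
after = tour_length(pts, tour)
```

This exhaustive O(n^2) move scan per pass is exactly what candidate lists from a Delaunay triangulation prune, which is how such methods reach 10^7-point instances.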
       
  • Algorithms, Vol. 16, Pages 92: A Hybrid Direct Search and Model-Based
           Derivative-Free Optimization Method with Dynamic Decision Processing and
           Application in Solid-Tank Design

    • Authors: Zhongda Huang, Andy Ogilvy, Steve Collins, Warren Hare, Michelle Hilts, Andrew Jirasek
      First page: 92
      Abstract: A derivative-free optimization (DFO) method is an optimization method that does not make use of derivative information in order to find the optimal solution. It is advantageous for solving real-world problems in which the only information available about the objective function is the output for a specific input. In this paper, we develop the framework for a DFO method called the DQL method. It is designed to be a versatile hybrid method capable of performing direct search, quadratic-model search, and line search all in the same method. We develop and test a series of different strategies within this framework. The benchmark results indicate that each of these strategies has distinct advantages and that there is no clear winner in the overall performance among efficiency and robustness. We develop the Smart DQL method by allowing the method to determine the optimal search strategies in various circumstances. The Smart DQL method is applied to a problem of solid-tank design for 3D radiation dosimetry provided by the UBCO (University of British Columbia—Okanagan) 3D Radiation Dosimetry Research Group. Given the limited evaluation budget, the Smart DQL method produces high-quality solutions.
      Citation: Algorithms
      PubDate: 2023-02-07
      DOI: 10.3390/a16020092
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 93: Tsetlin Machine for Sentiment Analysis and
           Spam Review Detection in Chinese

    • Authors: Xuanyu Zhang, Hao Zhou, Ke Yu, Xiaofei Wu, Anis Yazidi
      First page: 93
      Abstract: In Natural Language Processing (NLP), deep-learning neural networks have superior performance but pose transparency and explainability barriers, due to their black box nature, and, thus, there is a lack of trustworthiness. On the other hand, classical machine learning techniques are intuitive and easy to understand but often cannot perform satisfactorily. Fortunately, many research studies have recently indicated that the newly introduced model, Tsetlin Machine (TM), has reliable performance and, at the same time, enjoys human-level interpretability by nature, which is a promising approach to trade off effectiveness and interpretability. However, nearly all of the related works so far have concentrated on the English language, while research on other languages is relatively scarce. So, we propose a novel method, based on the TM model, in which the learning process is transparent and easily understandable for Chinese NLP tasks. Our model can learn semantic information in the Chinese language by clauses. For evaluation, we conducted experiments in two domains, namely sentiment analysis and spam review detection. The experimental results showed that, for both domains, our method could provide higher accuracy and a higher F1 score than complex, but non-transparent, deep-learning models, such as BERT and ERNIE.
      Citation: Algorithms
      PubDate: 2023-02-08
      DOI: 10.3390/a16020093
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 94: A Comparison of Different Topic Modeling
           Methods through a Real Case Study of Italian Customer Care

    • Authors: Gabriele Papadia, Massimo Pacella, Massimiliano Perrone, Vincenzo Giliberti
      First page: 94
      Abstract: The paper deals with the analysis of conversation transcriptions between customers and agents in a call center of a customer care service. The objective is to support the analysis of text transcription of human-to-human conversations, to obtain reports on customer problems and complaints, and on the way an agent has solved them. The aim is to provide customer care service with a high level of efficiency and user satisfaction. To this aim, topic modeling is considered since it facilitates insightful analysis from large documents and datasets, such as a summarization of the main topics and topic characteristics. This paper presents a performance comparison of four topic modeling algorithms: (i) Latent Dirichlet Allocation (LDA); (ii) Non-negative Matrix Factorization (NMF); (iii) Neural-ProdLDA (Neural LDA); and (iv) Contextualized Topic Models (CTM). The comparison study is based on a database containing real conversation transcriptions in Italian Natural Language. Experimental results and different topic evaluation metrics are analyzed in this paper to determine the most suitable model for the case study. The gained knowledge can be exploited by practitioners to identify the optimal strategy and to perform and evaluate topic modeling on Italian natural language transcriptions of human-to-human conversations. This work can be an asset for grounding applications of topic modeling and can be inspiring for similar case studies in the domain of customer care quality.
      Citation: Algorithms
      PubDate: 2023-02-08
      DOI: 10.3390/a16020094
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 95: Defect Detection Methods for Industrial
           Products Using Deep Learning Techniques: A Review

    • Authors: Alireza Saberironaghi, Jing Ren, Moustafa El-Gindy
      First page: 95
      Abstract: Over the last few decades, detecting surface defects has attracted significant attention as a challenging task. There are specific classes of problems that can be solved using traditional image processing techniques. However, these techniques struggle with complex textures in backgrounds, noise, and differences in lighting conditions. As a solution to this problem, deep learning has recently emerged, motivated by two main factors: accessibility to computing power and the rapid digitization of society, which enables the creation of large databases of labeled samples. This review paper aims to briefly summarize and analyze the current state of research on detecting defects using machine learning methods. First, deep learning-based detection of surface defects on industrial products is discussed from three perspectives: supervised, semi-supervised, and unsupervised. Secondly, the current research status of deep learning defect detection methods for X-ray images is discussed. Finally, we summarize the most common challenges and their potential solutions in surface defect detection, such as unbalanced sample identification, limited sample size, and real-time processing.
      Citation: Algorithms
      PubDate: 2023-02-08
      DOI: 10.3390/a16020095
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 96: Image Quality Assessment for Gibbs Ringing
           Reduction

    • Authors: Yue Wang, John J. Healy
      First page: 96
      Abstract: Gibbs ringing is an artefact that is inevitable in any imaging modality where the measurement is Fourier band-limited. It impacts the quality of the image by creating a ringing appearance around discontinuities. Many novel ways of suppressing the artefact have been proposed, including machine learning methods, but the quantitative comparisons of the results have frequently been lacking in rigour. In this paper, we examine image quality assessment metrics on three test images of differing complexity. We determine six metrics which show promise for simultaneously assessing the severity of Gibbs ringing and of other errors such as blurring. We also examined restricting each metric to a region of interest around the discontinuities in the image. We demonstrate that the region of interest approach does not improve the performance of the metrics. Finally, we examine the effect of the error threshold parameter in two metrics. Our results will aid development of best practice in comparison of algorithms for the suppression of Gibbs ringing.
      Citation: Algorithms
      PubDate: 2023-02-09
      DOI: 10.3390/a16020096
      Issue No: Vol. 16, No. 2 (2023)
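      The ringing discussed in the abstract above can be reproduced with a truncated Fourier series of a step: the partial sums overshoot near the discontinuity by roughly 9% of the jump, regardless of how many terms are kept. A minimal stdlib-only sketch (illustrative, not the paper's metrics):

      ```python
      import math

      def square_partial_sum(x, n_terms):
          """Partial Fourier series of a unit square wave (odd harmonics only)."""
          return (4.0 / math.pi) * sum(
              math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

      def max_overshoot(n_terms, samples=2000):
          """Peak value of the partial sum on (0, pi); exceeds 1 near x = 0."""
          return max(square_partial_sum(math.pi * i / samples, n_terms)
                     for i in range(1, samples))
      ```

      The peak approaches the Gibbs constant ≈ 1.179 (about a 9% overshoot of the unit jump) as the number of terms grows, rather than decaying to 1.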
       
  • Algorithms, Vol. 16, Pages 97: Nemesis: Neural Mean Teacher Learning-Based
           Emotion-Centric Speaker

    • Authors: Aryan Yousefi, Kalpdrum Passi
      First page: 97
      Abstract: Image captioning is the multi-modal task of automatically describing a digital image based on its contents and their semantic relationship. This research area has gained increasing popularity over the past few years; however, most of the previous studies have been focused on purely objective content-based descriptions of the image scenes. In this study, efforts have been made to generate more engaging captions by leveraging human-like emotional responses. To achieve this task, a mean teacher learning-based method has been applied to the recently introduced ArtEmis dataset. ArtEmis is the first large-scale dataset for emotion-centric image captioning, containing 455K emotional descriptions of 80K artworks from WikiArt. This method includes a self-distillation relationship between memory-augmented language models with meshed connectivity. These language models are trained in a cross-entropy phase and then fine-tuned in a self-critical sequence training phase. According to various popular natural language processing metrics, such as BLEU, METEOR, ROUGE-L, and CIDEr, our proposed model has obtained a new state of the art on ArtEmis.
      Citation: Algorithms
      PubDate: 2023-02-09
      DOI: 10.3390/a16020097
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 98: A Novel Intelligent Method for Fault
           Diagnosis of Steam Turbines Based on T-SNE and XGBoost

    • Authors: Zhiguo Liang, Lijun Zhang, Xizhe Wang
      First page: 98
      Abstract: Since failure of steam turbines occurs frequently and can cause huge losses for thermal plants, it is important to identify a fault in advance. A novel clustering fault diagnosis method for steam turbines based on t-distributed stochastic neighbor embedding (t-SNE) and extreme gradient boosting (XGBoost) is proposed in this paper. First, the t-SNE algorithm was used to map the high-dimensional data to the low-dimensional space, and K-means clustering was then performed in the low-dimensional space to distinguish the fault data from the normal data. Then, the imbalance problem in the data was processed by the synthetic minority over-sampling technique (SMOTE) algorithm to obtain the steam turbine characteristic data set with fault labels. Finally, the XGBoost algorithm was used to solve this multi-classification problem. The data set used in this paper was derived from the time series data of a steam turbine of a thermal power plant. In the processing analysis, the method achieved the best performance with an overall accuracy of 97% and an early warning of at least two hours in advance. The experimental results show that this method can effectively evaluate the condition and provide fault warning for power plant equipment.
      Citation: Algorithms
      PubDate: 2023-02-09
      DOI: 10.3390/a16020098
      Issue No: Vol. 16, No. 2 (2023)
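      The SMOTE step in the pipeline above synthesizes minority-class samples by interpolating between a sample and one of its nearest minority-class neighbours. A simplified stdlib-only sketch with a fixed seed and hypothetical data (not the authors' implementation):

      ```python
      import math
      import random

      def smote_like(minority, n_new, k=3, seed=0):
          """Generate n_new synthetic points, each interpolated between a
          randomly picked minority sample and one of its k nearest
          minority-class neighbours."""
          rng = random.Random(seed)
          synthetic = []
          for _ in range(n_new):
              base = rng.choice(minority)
              neighbours = sorted(
                  (p for p in minority if p is not base),
                  key=lambda p: math.dist(base, p))[:k]
              nb = rng.choice(neighbours)
              lam = rng.random()  # interpolation factor in [0, 1)
              synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
          return synthetic
      ```

      Each synthetic point is a convex combination of two real minority samples, so it stays inside the minority class's convex hull.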
       
  • Algorithms, Vol. 16, Pages 99: Enhancing Logistic Regression Using Neural
           Networks for Classification in Actuarial Learning

    • Authors: George Tzougas, Konstantin Kutzkov
      First page: 99
      Abstract: We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach.
      Citation: Algorithms
      PubDate: 2023-02-09
      DOI: 10.3390/a16020099
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 100: Assembly and Production Line Designing,
           Balancing and Scheduling with Inaccurate Data: A Survey and Perspectives

    • Authors: Yuri N. Sotskov
      First page: 100
      Abstract: Assembly lines (conveyors) are traditional means of large-scale and mass-scale productions. Assembly line balancing optimizes the assembly process by configuring and designing an assembly line for the same or similar types of final products. This problem consists of designing the assembly line and distributing the total workload for manufacturing each unit of the fixed product to be assembled among the ordered workstations along the constructed assembly line. The assembly line balancing research is focused mainly on simple assembly line balancing problems, which are restricted by a set of conditions making a considered assembly line ideal for research. A lot of published research has been carried out in order to describe and solve (usually heuristically) more realistic generalized assembly line balancing problems. Assembly line designing, balancing and scheduling problems with non-deterministic (stochastic, fuzzy or uncertain) parameters have been investigated in many published research works. This paper is about the design and optimization methods for assembly and disassembly lines. We survey the recent developments for designing, balancing and scheduling assembly (disassembly) lines. New formulations of simple assembly line balancing problems are presented in order to take into account modifications and uncertainties characterized by real assembly productions.
      Citation: Algorithms
      PubDate: 2023-02-10
      DOI: 10.3390/a16020100
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 101: Comparative Analysis of the Methods for
           Fiber Bragg Structures Spectrum Modeling

    • Authors: Timur Agliullin, Vladimir Anfinogentov, Oleg Morozov, Airat Sakhabutdinov, Bulat Valeev, Ayna Niyazgulyeva, Yagmyrguly Garovov
      First page: 101
      Abstract: The work is dedicated to a comparative analysis of the following methods for fiber Bragg grating (FBG) spectral response modeling. The Layer Sweep (LS) method, which is similar to the common layer peeling algorithm, is based on the reflectance and transmittance determination for the plane waves propagating through layered structures, which results in the solution of a system of linear equations for the transmittance and reflectance of each layer using the sweep method. Another considered method is based on the determination of transfer matrices (TM) for the FBG as a whole. Firstly, a homogeneous FBG was modeled using both methods, and the resulting reflectance spectra were compared to the one obtained via a specialized commercial software package. Secondly, modeling results of a π-phase-shifted FBG were presented and discussed. For both FBG models, the influence of the partition interval of the LS method on the simulated spectrum was studied. Based on the analysis of the simulation data, additional required modeling conditions for phase-shifted FBGs were established, which enhanced the modeling performance of the LS method.
      Citation: Algorithms
      PubDate: 2023-02-10
      DOI: 10.3390/a16020101
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 102: Metamorphic Testing of Relation Extraction
           Models

    • Authors: Yuhe Sun, Zuohua Ding, Hongyun Huang, Senhao Zou, Mingyue Jiang
      First page: 102
      Abstract: Relation extraction (RE) is a fundamental NLP task that aims to identify relations between entities in a given text. RE forms the basis for many advanced NLP tasks, such as question answering and text summarization, and thus its quality is critical to the relevant downstream applications. However, evaluating the quality of RE models is non-trivial. On the one hand, obtaining ground truth labels for individual test inputs is tedious and even difficult. On the other hand, there is an increasing need to understand the characteristics of RE models in terms of various aspects. To mitigate these issues, this study proposes evaluating RE models by applying metamorphic testing (MT). A total of eight metamorphic relations (MRs) are identified based on three categories of transformation operations, namely replacement, swap, and combination. These MRs encode some expected properties of different aspects of RE. We further apply MT to three popular RE models. Our experiments reveal a large number of prediction failures in the subject RE models, confirming that MT is effective for evaluating RE models. Further analysis of the experimental results reveals the advantages and disadvantages of our subject models and also uncovers some typical issues of RE models.
      Citation: Algorithms
      PubDate: 2023-02-10
      DOI: 10.3390/a16020102
      Issue No: Vol. 16, No. 2 (2023)
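      A metamorphic relation of the "swap" category described above can be checked without ground-truth labels: swapping the two entity mentions should invert a directed relation, and any violation counts as a prediction failure. A schematic harness around a hypothetical `model` callable with a hypothetical label set (illustrative only, not the paper's eight MRs):

      ```python
      # INVERSE maps a directed relation label to the label expected after the
      # two entity mentions are swapped (hypothetical label set).
      INVERSE = {"employs": "employed_by", "employed_by": "employs",
                 "located_in": "contains", "contains": "located_in"}

      def swap_mr_failures(model, cases):
          """Count source/follow-up pairs where the swap relation is violated."""
          failures = 0
          for sentence, e1, e2 in cases:
              if INVERSE.get(model(sentence, e1, e2)) != model(sentence, e2, e1):
                  failures += 1
          return failures
      ```

      A model that ignores argument order necessarily violates this MR, which is exactly the kind of failure MT surfaces without any labeled test set.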
       
  • Algorithms, Vol. 16, Pages 103: Local Convergence Analysis of a One
           Parameter Family of Simultaneous Methods with Applications to Real-World
           Problems

    • Authors: Tsonyo M. Pavkov, Valentin G. Kabadzhov, Ivan K. Ivanov, Stoil I. Ivanov
      First page: 103
      Abstract: In this paper, we provide a detailed local convergence analysis of a one-parameter family of iteration methods for the simultaneous approximation of polynomial zeros due to Ivanov (Numer. Algor. 75(4): 1193–1204, 2017). Thus, we obtain two local convergence theorems that provide sufficient conditions to guarantee the Q-cubic convergence of all members of the family. Among the other contributions, our results unify the most recent results of this kind for the well-known Dochev–Byrnev and Ehrlich methods. Several practical applications are further given to emphasize the advantages of the studied family of methods and to show the applicability of the theoretical results.
      Citation: Algorithms
      PubDate: 2023-02-10
      DOI: 10.3390/a16020103
      Issue No: Vol. 16, No. 2 (2023)
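      The Ehrlich (Aberth) simultaneous iteration mentioned above refines all zero approximations of a polynomial at once, each update using the current positions of the other approximations. A compact stdlib-only sketch for a polynomial given by its coefficients (an illustrative implementation of the classical method, not the paper's one-parameter family):

      ```python
      def poly_eval(coeffs, z):
          """Evaluate p and p' at z via Horner's scheme; coeffs high-to-low."""
          p, dp = 0j, 0j
          for c in coeffs:
              dp = dp * z + p
              p = p * z + c
          return p, dp

      def ehrlich(coeffs, guesses, iters=50):
          """Simultaneously refine all root approximations (Ehrlich/Aberth):
          z_i <- z_i - 1 / (p'(z_i)/p(z_i) - sum_{j != i} 1/(z_i - z_j))."""
          z = [complex(g) for g in guesses]
          for _ in range(iters):
              for i in range(len(z)):
                  p, dp = poly_eval(coeffs, z[i])
                  if p == 0:
                      continue  # already at a root
                  corr = dp / p - sum(1 / (z[i] - z[j])
                                      for j in range(len(z)) if j != i)
                  z[i] -= 1 / corr
          return z
      ```

      From distinct complex starting points, all roots are recovered together, which is the "simultaneous" property the convergence theorems above address.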
       
  • Algorithms, Vol. 16, Pages 104: Quadratic Multilinear Discriminant
           Analysis for Tensorial Data Classification

    • Authors: Cristian Minoccheri, Olivia Alge, Jonathan Gryak, Kayvan Najarian, Harm Derksen
      First page: 104
      Abstract: Over the past decades, increasing attention has been paid to adapting machine learning methods to fully exploit the higher order structure of tensorial data. One problem of great interest is tensor classification, and in particular the extension of linear discriminant analysis to the multilinear setting. We propose a novel method for multilinear discriminant analysis that is radically different from the ones considered so far, and it is the first extension to tensors of quadratic discriminant analysis. Our proposed approach uses invariant theory to extend the nearest Mahalanobis distance classifier to the higher-order setting, and to formulate a well-behaved optimization problem. We extensively test our method on a variety of synthetic data, outperforming previously proposed MDA techniques. We also show how to leverage multi-lead ECG data by constructing tensors via taut string, and use our method to classify healthy signals versus unhealthy ones; our method outperforms state-of-the-art MDA methods, especially after adding significant levels of noise to the signals. Our approach reached an AUC of 0.95(0.03) on clean signals—where the second best method reached 0.91(0.03)—and an AUC of 0.89(0.03) after adding noise to the signals (with a signal-to-noise ratio of −30)—where the second best method reached 0.85(0.05). Our approach is fundamentally different from previous work in this direction, and proves to be faster, more stable, and more accurate on the tests we performed.
      Citation: Algorithms
      PubDate: 2023-02-11
      DOI: 10.3390/a16020104
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 105: Union Models for Model Families: Efficient
           Reasoning over Space and Time

    • Authors: Sanaa Alwidian, Daniel Amyot, Yngve Lamo
      First page: 105
      Abstract: A model family is a set of related models in a given language, with commonalities and variabilities that result from evolution of models over time and/or variation over intended usage (the spatial dimension). As the family size increases, it becomes cumbersome to analyze models individually. One solution is to represent a family using one global model that supports analysis. In this paper, we propose the concept of union model as a complete and concise representation of all members of a model family. We use graph theory to formalize a model family as a set of attributed typed graphs in which all models are typed over the same metamodel. The union model is formalized as the union of all graph elements in the family. These graph elements are annotated with their corresponding model versions and configurations. This formalization is independent from the modeling language used. We also demonstrate how union models can be used to perform reasoning tasks on model families, e.g., trend analysis and property checking. Empirical results suggest potential time-saving benefits when using union models for analysis and reasoning over a set of models all at once as opposed to separately analyzing single models one at a time.
      Citation: Algorithms
      PubDate: 2023-02-11
      DOI: 10.3390/a16020105
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 106: Special Issue on Logic-Based Artificial
           Intelligence

    • Authors: Giovanni Amendola
      First page: 106
      Abstract: Since its inception, research in the field of Artificial Intelligence (AI) has had a fundamentally logical approach; therefore, discussions have taken place to establish a way of distinguishing symbolic AI from sub-symbolic AI, basing the approach instead on the statistical approaches typical of machine learning, deep learning or Bayesian networks [...]
      Citation: Algorithms
      PubDate: 2023-02-13
      DOI: 10.3390/a16020106
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 107: Self-Sustainability Assessment for a High
           Building Based on Linear Programming and Computational Fluid Dynamics

    • Authors: Carlos Oliveira, José Baptista, Adelaide Cerveira
      First page: 107
      Abstract: Given the excess energy drawn from non-renewable sources, new energy generation solutions must be adopted to offset it. In this sense, the integration of renewable energy sources in high-rise buildings reduces the need for energy from the national power grid to maximize the self-sustainability of common services. Moreover, self-consumption in low-voltage and medium-voltage networks strongly facilitates a reduction in external energy dependence. For consumers, the benefits of installing small wind turbines and energy storage systems include tax benefits and reduced electricity bills as well as a profitable system after the payback period. This paper focuses on assessing the wind potential in a high-rise building through computational fluid dynamics (CFD) simulations, quantifying the potential for wind energy production by small wind turbines (WT) at the installation site. Furthermore, a mathematical model is proposed to optimize wind energy production for a self-consumption system to minimize the total cost of energy purchased from the grid, maximizing the return on investment. The potential of a CFD-based project practice that has wide application in developing the most varied processes and equipment results in a huge reduction in the time and costs spent compared to conventional practices. Furthermore, the optimization model guarantees a significant decrease in the energy purchased at peak hours through the energy stored in energy storage systems (ESS). The results show that the efficiency of the proposed model leads to an investment amortization period of 7 years for a lifetime of 20 years.
      Citation: Algorithms
      PubDate: 2023-02-13
      DOI: 10.3390/a16020107
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 108: Algorithms in Low-Code-No-Code for
           Research Applications: A Practical Review

    • Authors: Fahim Sufi
      First page: 108
      Abstract: Algorithms have evolved from machine code to low-code-no-code (LCNC) in the past 20 years. Observing the growth of LCNC-based algorithm development, the CEO of GitHub mentioned that the future of coding is no coding at all. This paper systematically reviewed several of the recent studies using mainstream LCNC platforms to understand the area of research, the LCNC platforms used within these studies, and the features of LCNC used for solving individual research questions. We identified 23 research works using LCNC platforms, such as SetXRM, the vf-OS platform, Aure-BPM, CRISP-DM, and Microsoft Power Platform (MPP). About 61% of these existing studies resorted to MPP as their primary choice. The critical research problems solved by these research works were within the areas of global news analysis, social media analysis, landslides, tornadoes, COVID-19, digitization of process, manufacturing, logistics, and software/app development. The main reasons identified for solving research problems with LCNC algorithms were as follows: (1) obtaining research data from multiple sources with complete automation; (2) generating artificial intelligence-driven insights without having to manually code them. In the course of describing this review, this paper also demonstrates a practical approach to implement a cyber-attack monitoring algorithm with the most popular LCNC platform.
      Citation: Algorithms
      PubDate: 2023-02-13
      DOI: 10.3390/a16020108
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 109: Towards a Flexible Assessment of
           Compliance with Clinical Protocols Using Fuzzy Aggregation Techniques

    • Authors: Anna Wilbik, Irene Vanderfeesten, Dennis Bergmans, Serge Heines, Oktay Turetken, Walther van Mook
      First page: 109
      Abstract: In healthcare settings, compliance with clinical protocols and medical guidelines is important to ensure high-quality, safe and effective treatment of patients. How to measure compliance and how to represent compliance information in an interpretable and actionable way is still an open challenge. In this paper, we propose new metrics for compliance assessments. For this purpose, we use two fuzzy aggregation techniques, namely the OWA operator and the Sugeno integral. The proposed measures take into consideration three factors: (i) the degree of compliance with a single activity, (ii) the degree of compliance of a patient, and (iii) the importance of the activities. The proposed measures are applied to two clinical protocols used in practice. We demonstrate that the proposed measures for compliance can further aid clinicians in assessing the aspect of protocol compliance when evaluating the effectiveness of implemented clinical protocols.
      Citation: Algorithms
      PubDate: 2023-02-13
      DOI: 10.3390/a16020109
      Issue No: Vol. 16, No. 2 (2023)
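      The OWA (ordered weighted averaging) operator used in the abstract above first sorts the inputs in descending order and then takes a weighted sum, so the weights attach to rank positions rather than to particular activities. A minimal sketch with hypothetical compliance scores (the paper additionally uses the Sugeno integral, not shown here):

      ```python
      def owa(values, weights):
          """Ordered weighted average: weights apply to the sorted values.

          weights must be the same length as values and sum to 1; the first
          weight multiplies the largest value, the last the smallest."""
          if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
              raise ValueError("weights must match values and sum to 1")
          return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
      ```

      The weight vector selects the aggregation attitude: (1, 0, …, 0) yields the maximum (most optimistic), (0, …, 0, 1) the minimum (most pessimistic), and uniform weights the plain mean.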
       
  • Algorithms, Vol. 16, Pages 110: Model Parallelism Optimization for CNN
           FPGA Accelerator

    • Authors: Jinnan Wang, Weiqin Tong, Xiaoli Zhi
      First page: 110
      Abstract: Convolutional neural networks (CNNs) have made impressive achievements in image classification and object detection. For hardware with limited resources, it is not easy to achieve CNN inference with a large number of parameters without external storage. Model parallelism is an effective way to reduce resource usage by distributing CNN inference among several devices. However, parallelizing a CNN model is not easy, because CNN models have an essentially tightly-coupled structure. In this work, we propose a novel model parallelism method to decouple the CNN structure with group convolution and a new channel shuffle procedure. Our method could eliminate inter-device synchronization while reducing the memory footprint of each device. Using the proposed model parallelism method, we designed a parallel FPGA accelerator for the classic CNN model ShuffleNet. This accelerator was further optimized with features such as aggregate read and kernel vectorization to fully exploit the hardware-level parallelism of the FPGA. We conducted experiments with ShuffleNet on two FPGA boards, each of which had an Intel Arria 10 GX1150 and 16GB DDR3 memory. The experimental results showed that when using two devices, ShuffleNet achieved a 1.42× speed increase and reduced its memory footprint by 34%, as compared to its non-parallel counterpart, while maintaining accuracy.
      Citation: Algorithms
      PubDate: 2023-02-14
      DOI: 10.3390/a16020110
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 111: Learning Data for Neural-Network-Based
           Numerical Solution of PDEs: Application to Dirichlet-to-Neumann Problems

    • Authors: Ferenc Izsák, Taki Eddine Djebbar
      First page: 111
      Abstract: We propose neural-network-based algorithms for the numerical solution of boundary-value problems for the Laplace equation. Such a numerical solution is inherently mesh-free, and in the approximation process, stochastic algorithms are employed. The chief challenge in the solution framework is to generate appropriate learning data in the absence of the solution. Our main idea was to use fundamental solutions for this purpose and make a link with the so-called method of fundamental solutions. In this way, beyond the classical boundary-value problems, Dirichlet-to-Neumann operators can also be approximated. This problem was investigated in detail. Moreover, for this complex problem, low-rank approximations were constructed. Such efficient solution algorithms can serve as a basis for computational electrical impedance tomography.
      Citation: Algorithms
      PubDate: 2023-02-14
      DOI: 10.3390/a16020111
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 112: V-SOC4AS: A Vehicle-SOC for Improving
           Automotive Security

    • Authors: Vita Santa Barletta, Danilo Caivano, Mirko De Vincentiis, Azzurra Ragone, Michele Scalera, Manuel Ángel Serrano Martín
      First page: 112
      Abstract: Integrating embedded systems into next-generation vehicles is proliferating as they increase safety, efficiency, and driving comfort. These functionalities are provided by hundreds of electronic control units (ECUs) that communicate with each other using various protocols that, if not properly designed, may be vulnerable to local or remote attacks. The paper presents a vehicle-security operation center for improving automotive security (V-SOC4AS) to enhance the detection, response, and prevention of cyber-attacks in the automotive context. The goal is to monitor in real-time each subsystem of intra-vehicle communication, that is controller area network (CAN), local interconnect network (LIN), FlexRay, media oriented systems transport (MOST), and Ethernet. Therefore, to achieve this goal, security information and event management (SIEM) was used to monitor and detect malicious attacks in intra-vehicle and inter-vehicle communications: messages transmitted between vehicle ECUs; infotainment and telematics systems, which provide passengers with entertainment capabilities and information about the vehicle system; and vehicular ports, which allow vehicles to connect to diagnostic devices and to upload content of various types. As a result, this allows the automation and improvement of threat detection and incident response processes. Furthermore, the V-SOC4AS allows the classification of the received message as malicious and non-malicious and acquisition of additional information about the type of attack. Thus, this reduces the detection time and provides more support for response activities. Experimental evaluation was conducted on two state-of-the-art attacks: denial of service (DoS) and fuzzing. An open-source dataset was used to simulate the vehicles. V-SOC4AS exploits security information and event management to analyze the packets sent by a vehicle using a rule-based mechanism. If the payload contains a CAN frame attack, the SOC analysts are notified.
      Citation: Algorithms
      PubDate: 2023-02-14
      DOI: 10.3390/a16020112
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 113: Examination of Lemon Bruising Using
           Different CNN-Based Classifiers and Local Spectral-Spatial Hyperspectral
           Imaging

    • Authors: Razieh Pourdarbani, Sajad Sabzi, Mohsen Dehghankar, Mohammad H. Rohban, Juan I. Arribas
      First page: 113
      Abstract: The presence of bruises on fruits often indicates cell damage, which can lead to a decrease in the ability of the peel to keep oxygen away from the fruits, and as a result, oxygen breaks down cell walls and membranes damaging fruit content. When chemicals in the fruit are oxidized by enzymes such as polyphenol oxidase, the chemical reaction produces an undesirable and apparent brown color effect, among others. Early detection of bruising prevents low-quality fruit from entering the consumer market. Hereupon, the present paper aims at early identification of bruised lemon fruits using 3D-convolutional neural networks (3D-CNN) via a local spectral-spatial hyperspectral imaging technique, which takes into account adjacent image pixel information in both the frequency (wavelength) and spatial domains of a 3D-tensor hyperspectral image of input lemon fruits. A total of 70 sound lemons were picked up from orchards. First, all fruits were labeled and the hyperspectral images (wavelength range 400–1100 nm) were captured as belonging to the healthy (unbruised) class (class label 0). Next, bruising was applied to each lemon by freefall. Then, the hyperspectral images of all bruised samples were captured in a time gap of 8 (class label 1) and 16 h (class label 2) after bruising was induced, thus resulting in a 3-class ternary classification problem. Four well-known 3D-CNN models, namely ResNet, ShuffleNet, DenseNet, and MobileNet, were used to classify bruised lemons in Python. Results revealed that the highest classification accuracy (90.47%) was obtained by the ResNet model, followed by DenseNet (85.71%), ShuffleNet (80.95%) and MobileNet (73.80%), all evaluated on the test set. The ResNet model had a larger number of parameters, yet it trained faster than the other models with fewer free parameters. ShuffleNet and MobileNet were easier to train and they needed less storage, but they could not achieve a classification error as low as the other two counterparts.
      Citation: Algorithms
      PubDate: 2023-02-14
      DOI: 10.3390/a16020113
      Issue No: Vol. 16, No. 2 (2023)
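The local spectral-spatial idea above — a kernel that spans neighbouring wavelengths as well as neighbouring pixels — can be sketched with a single 3D convolution in NumPy. The toy cube, kernel, and sizes below are illustrative assumptions, not the paper's trained models or data:

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Direct 'valid' 3D convolution (cross-correlation, CNN-style)
    over a (bands, height, width) hyperspectral cube."""
    kb, kh, kw = kernel.shape
    b, h, w = cube.shape
    out = np.zeros((b - kb + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(cube[i:i+kb, j:j+kh, k:k+kw] * kernel)
    return out

# Toy cube: 10 spectral bands of an 8x8 patch; the 3x3x3 kernel mixes
# adjacent wavelengths and adjacent pixels jointly.
rng = np.random.default_rng(0)
cube = rng.random((10, 8, 8))
kernel = rng.random((3, 3, 3))
feat = conv3d_valid(cube, kernel)
print(feat.shape)  # (8, 6, 6)
```

A real 3D-CNN stacks many such kernels with nonlinearities and pooling; this shows only why the feature responds to spectral and spatial neighbourhoods at once.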
       
  • Algorithms, Vol. 16, Pages 114: On-Board Decentralized Observation
           Planning for LEO Satellite Constellations

    • Authors: Bingyu Song, Yingwu Chen, Qing Yang, Yahui Zuo, Shilong Xu, Yuning Chen
      First page: 114
      Abstract: Multi-satellite on-board observation planning (MSOOP) is a variant of the multi-agent task allocation problem (MATAP). MSOOP allocates observation tasks in a fully cooperative mode to maximize the profits of the whole system. In this paper, MSOOP for LEO satellite constellations is investigated, and a decentralized algorithm is exploited to solve it. The problem description of MSOOP for LEO satellite constellations is detailed. The coupled constraints make MSOOP more complex than other task allocation problems. An improved Consensus-Based Bundle Algorithm (ICBBA), which includes a bundle construction phase and a consensus check phase, is proposed. A constraint check and a mask recovery are introduced into bundle construction and consensus check to handle the coupled constraints. The fitness function is adjusted to adapt to the characteristics of different scenes. Experimental results on a series of instances demonstrate the effectiveness of the proposed algorithm.
      Citation: Algorithms
      PubDate: 2023-02-14
      DOI: 10.3390/a16020114
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 115: An Energy-Aware Load Balancing Method for
           IoT-Based Smart Recycling Machines Using an Artificial Chemical Reaction
           Optimization Algorithm

    • Authors: Tabaghchi Milan, Darbandi, Jafari Navimipour, Yalcın
      First page: 115
      Abstract: Recycling is very important for a sustainable and clean environment. Developed and developing countries alike face waste management and recycling problems. On the other hand, the Internet of Things (IoT) is a well-known and widely applied infrastructure for connecting physical devices. It is an important technology that has been researched and implemented in recent years and promises to positively influence several industries, including recycling and trash management. The impact of the IoT on recycling and waste management is examined using standard operating practices in recycling. Recycling facilities, for instance, can use the IoT to manage and monitor the recycling situation in various places while allocating the logistics for transportation and distribution processes to minimize recycling costs and lead times. Companies can thus use historical patterns to track usage trends in their service regions, assess their accessibility to gather resources, and arrange their activities accordingly. Additionally, energy is a significant aspect of the IoT, since many devices are linked to the internet and the devices, sensors, nodes, and objects are all energy-constrained. Because the devices are constrained by their nature, a load-balancing protocol is crucial in an IoT ecosystem. Due to the importance of this issue, this study presents an energy-aware load-balancing method for IoT-based smart recycling machines using an artificial chemical reaction optimization algorithm. The experimental results indicated that the proposed solution achieved excellent performance: the imbalance degree (5.44%), energy consumption (11.38%), and delay time (9.05%) were reduced using the proposed method.
      Citation: Algorithms
      PubDate: 2023-02-14
      DOI: 10.3390/a16020115
      Issue No: Vol. 16, No. 2 (2023)
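To give a feel for the chemical-reaction-optimization flavour of search invoked above, here is a heavily simplified single-molecule sketch: the "molecule" encoding, the on-wall-collision move, the energy-conservation acceptance rule, and the max-min imbalance objective are all illustrative stand-ins, not the paper's actual algorithm or metrics:

```python
import numpy as np

def imbalance(assign, load, n_nodes):
    """Max-min spread of per-node load: the quantity being minimized."""
    per_node = np.bincount(assign, weights=load, minlength=n_nodes)
    return per_node.max() - per_node.min()

rng = np.random.default_rng(4)
load = rng.uniform(1, 10, 30)             # 30 tasks for 4 machines
n_nodes = 4
mol = rng.integers(0, n_nodes, 30)        # a "molecule": one assignment
pe = pe0 = imbalance(mol, load, n_nodes)  # potential energy = cost
ke = 10.0                                 # kinetic energy buffer
for _ in range(2000):
    trial = mol.copy()                    # on-wall ineffective collision:
    trial[rng.integers(30)] = rng.integers(n_nodes)  # move one task
    trial_pe = imbalance(trial, load, n_nodes)
    if trial_pe <= pe + ke:               # energy conservation allows it
        ke = max(0.0, (ke + pe - trial_pe) * 0.9)    # dissipate energy
        mol, pe = trial, trial_pe
print(f"imbalance: {pe0:.2f} -> {pe:.2f}")
```

The kinetic-energy buffer lets early uphill moves through and then decays, so the search settles into a balanced assignment; a full CRO also uses decomposition and synthesis operators over a population of molecules.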
       
  • Algorithms, Vol. 16, Pages 116: PigSNIPE: Scalable Neuroimaging Processing
           Engine for Minipig MRI

    • Authors: Michal Brzus, Kevin Knoernschild, Jessica C. Sieren, Hans J. Johnson
      First page: 116
      Abstract: Translation of basic animal research to find effective methods of diagnosing and treating human neurological disorders requires parallel analysis infrastructures. Small animals such as mice provide exploratory animal disease models. However, many interventions developed using small animal models fail to translate to human use due to physical or biological differences. Recently, large-animal minipigs have emerged in neuroscience due to both their brain similarity and economic advantages. Medical image processing is a crucial part of research, as it allows researchers to monitor their experiments and understand disease development. By pairing four reinforcement learning models and five deep learning UNet segmentation models with existing algorithms, we developed PigSNIPE, a pipeline for the automated handling, processing, and analysis of large-scale data sets of minipig MR images. PigSNIPE allows for image registration, AC-PC alignment, detection of 19 anatomical landmarks, skull stripping, brainmask and intracranial volume segmentation (DICE 0.98), tissue segmentation (DICE 0.82), and caudate-putamen brain segmentation (DICE 0.8) in under two minutes. To the best of our knowledge, this is the first automated pipeline tool aimed at large animal images, which can significantly reduce the time and resources needed for analyzing minipig neuroimages.
      Citation: Algorithms
      PubDate: 2023-02-15
      DOI: 10.3390/a16020116
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 117: Extrinsic Bayesian Optimization on
           Manifolds

    • Authors: Yihao Fang, Mu Niu, Pokman Cheung, Lizhen Lin
      First page: 117
      Abstract: We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds. Bayesian optimization algorithms build a surrogate of the objective function by employing Gaussian processes and utilize the uncertainty in that surrogate by deriving an acquisition function. This acquisition function represents the probability of improvement based on the kernel of the Gaussian process, which guides the search in the optimization process. The critical challenge for designing Bayesian optimization algorithms on manifolds lies in the difficulty of constructing valid covariance kernels for Gaussian processes on general manifolds. Our approach is to employ extrinsic Gaussian processes by first embedding the manifold into some higher dimensional Euclidean space via equivariant embeddings and then constructing a valid covariance kernel on the image manifold after the embedding. This leads to efficient and scalable algorithms for optimization over complex manifolds. A simulation study and real data analyses are carried out to demonstrate the utility of our eBO framework by applying it to various optimization problems over manifolds such as the sphere, the Grassmannian, and the manifold of positive definite matrices.
      Citation: Algorithms
      PubDate: 2023-02-15
      DOI: 10.3390/a16020117
      Issue No: Vol. 16, No. 2 (2023)
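The embedding idea above can be made concrete with a toy: the circle S¹ is embedded in R², an ordinary RBF kernel is evaluated on the embedded points — which is automatically a valid covariance on the manifold — and an acquisition function guides the search. This is a hedged sketch, not the authors' eBO implementation: the objective, the length-scale, and the use of a UCB acquisition (rather than probability of improvement) are illustrative assumptions:

```python
import numpy as np

def embed(theta):                       # equivariant embedding S^1 -> R^2
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

def rbf(A, B, ls=0.5):                  # kernel on the *embedded* points
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def f(theta):                           # unknown objective to maximize
    return np.cos(theta - 1.0)          # optimum at theta = 1

rng = np.random.default_rng(3)
thetas = rng.uniform(0, 2 * np.pi, 5)   # initial design on the manifold
for _ in range(15):
    X, y = embed(thetas), f(thetas)
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    cand = np.linspace(0, 2 * np.pi, 200)
    Ks = rbf(embed(cand), X)
    mu = Ks @ np.linalg.solve(K, y)     # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))   # acquisition
    thetas = np.append(thetas, cand[np.argmax(ucb)])
print(round(thetas[np.argmax(f(thetas))], 2))  # best point found
```

Because the kernel only ever sees embedded coordinates, the same loop works unchanged for a sphere in R³ or a Grassmannian under its standard projection embedding.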
       
  • Algorithms, Vol. 16, Pages 118: EEG Data Augmentation for Emotion
           Recognition with a Task-Driven GAN

    • Authors: Qing Liu, Jianjun Hao, Yijun Guo
      First page: 118
      Abstract: The high cost of acquiring training data for electroencephalogram (EEG)-based emotion recognition makes it difficult to establish a high-precision model from EEG signals for emotion recognition tasks. Given the outstanding performance of generative adversarial networks (GANs) in data augmentation in recent years, this paper proposes a task-driven method based on CWGAN to generate high-quality artificial data. The generated data are represented as multi-channel EEG differential entropy feature maps, and a task network (an emotion classifier) is introduced to guide the generator during adversarial training. The evaluation results show that the proposed method can generate artificial data with clearer classifications and distributions more similar to the real data, resulting in obvious improvements in EEG-based emotion recognition tasks.
      Citation: Algorithms
      PubDate: 2023-02-15
      DOI: 10.3390/a16020118
      Issue No: Vol. 16, No. 2 (2023)
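The differential-entropy features mentioned above have a closed form under a Gaussian assumption on each band-passed channel; a minimal sketch (the channel count, band-passing, and variances here are assumptions for illustration, not the paper's data):

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-passed EEG segment, assuming it is Gaussian:
    0.5 * ln(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

# One DE value per (channel, frequency band) pair fills the feature map
rng = np.random.default_rng(1)
channels = rng.normal(0.0, [[0.5], [1.0], [2.0]], size=(3, 5000))
de_map = np.array([differential_entropy(ch) for ch in channels])
print(de_map.argmax())  # 2: the widest-variance channel has largest DE
```

Arranging these per-channel, per-band scalars on a scalp-layout grid is what yields the image-like maps a GAN can learn to generate.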
       
  • Algorithms, Vol. 16, Pages 119: Periodicity Intensity Reveals Insights
           into Time Series Data: Three Use Cases

    • Authors: Alan F. Smeaton, Feiyan Hu
      First page: 119
      Abstract: Periodic phenomena are oscillating signals found in many naturally occurring time series. A periodogram can be used to measure the intensities of oscillations at different frequencies over an entire time series, but sometimes we are interested in how periodicity intensity at a specific frequency varies throughout the series. This can be measured by calculating the periodicity intensity within a window, then sliding the window and recalculating the intensity, giving an indication of how periodicity intensity at a specific frequency changes throughout the series. We illustrate three applications of this, the first of which concerns the movements of a herd of new-born calves, where we show how intensity in the 24 h periodicity increases and decreases synchronously across the herd. We also show how changes in the 24 h periodicity intensity of activities detected from in-home sensors can be indicative of overall wellness. We illustrate this on several weeks of sensor data gathered from each of the homes of 23 older adults. Our third application is the intensity of the 7-day periodicity of hundreds of university students accessing online resources from a virtual learning environment (VLE) and how the regularity of their weekly learning behaviours changes throughout a teaching semester. The paper demonstrates how periodicity intensity reveals insights into time series data not visible using other forms of analysis.
      Citation: Algorithms
      PubDate: 2023-02-15
      DOI: 10.3390/a16020119
      Issue No: Vol. 16, No. 2 (2023)
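The sliding-window procedure described above is simple to sketch: compute a periodogram per window and track the power at the bin nearest the target period. This is a minimal NumPy illustration with a synthetic signal, not the authors' implementation; the window length, step, and normalization are assumptions:

```python
import numpy as np

def periodicity_intensity(x, period, window, step):
    """Fraction of each sliding window's (mean-removed) spectral power
    that falls in the bin nearest the target period."""
    freqs = np.fft.rfftfreq(window)                  # cycles per sample
    target = np.argmin(np.abs(freqs - 1.0 / period))
    out = []
    for start in range(0, len(x) - window + 1, step):
        seg = x[start:start + window]
        power = np.abs(np.fft.rfft(seg - seg.mean())) ** 2
        out.append(power[target] / power.sum())
    return np.array(out)

# Hourly samples with a 24 h rhythm that fades halfway through
rng = np.random.default_rng(2)
t = np.arange(24 * 40)
amp = np.where(t < len(t) // 2, 1.0, 0.1)
x = amp * np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(len(t))
intensity = periodicity_intensity(x, period=24, window=24 * 5, step=24)
print(intensity[0] > intensity[-1])  # True: the rhythm weakens
```

Choosing the window as a whole number of target periods (here 5 days of hourly data) puts the target frequency exactly on an FFT bin, which keeps the intensity estimate from leaking into neighbouring bins.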
       
  • Algorithms, Vol. 16, Pages 120: Rapid Prototyping of H∞ Algorithm
           for Real-Time Displacement Volume Control of Axial Piston Pumps

    • Authors: Alexander Mitov, Tsonyo Slavov, Jordan Kralev
      First page: 120
      Abstract: A system for the rapid prototyping of real-time control algorithms for open-circuit variable-displacement axial-piston pumps is presented. In order to establish real-time control, communication, and synchronization with the programmable logic controller of an axial piston pump, a custom CAN communication protocol is developed. This protocol is realized as a Simulink® S-function, which is part of the main Simulink® model. This model runs in real time and allows for the rapid prototyping of various control strategies, including advanced algorithms such as H∞ control. The aim of the algorithm is to achieve control system performance in the presence of various load disturbances with an admissible control signal rate and amplitude. In contrast to conventional systems, the developed solution uses an embedded approach for the prototyping of various algorithms. The obtained results show the advantages of the designed H∞ controller, which ensures the robustness of the closed-loop system in the presence of significant load disturbances. These types of systems with displacement volume regulation are important for industrial hydraulic drive systems with relatively high power.
      Citation: Algorithms
      PubDate: 2023-02-15
      DOI: 10.3390/a16020120
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 121: Interpretation for Variational Autoencoder
           Used to Generate Financial Synthetic Tabular Data

    • Authors: Jinhong Wu, Konstantinos Plataniotis, Lucy Liu, Ehsan Amjadian, Yuri Lawryshyn
      First page: 121
      Abstract: Synthetic data, artificially generated by computer programs, has become more widely used in the financial domain to mitigate privacy concerns. The Variational Autoencoder (VAE) is one of the most popular deep-learning models for generating synthetic data. However, the VAE is often considered a “black box” due to its opaqueness. Although some studies have been conducted to provide explanatory insights into VAEs, research focusing on explaining how the input data could influence a VAE to create synthetic data, especially for tabular data, is still lacking. Yet in the financial industry, most data are stored in a tabular format. This paper proposes a sensitivity-based method to assess the impact of input tabular data on how the VAE synthesizes data. This sensitivity-based method can provide both global and local interpretations efficiently and intuitively. To test this method, a simulated dataset and three Kaggle banking tabular datasets were employed. The results confirmed the applicability of this proposed method.
      Citation: Algorithms
      PubDate: 2023-02-16
      DOI: 10.3390/a16020121
      Issue No: Vol. 16, No. 2 (2023)
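A local sensitivity of the kind described above is essentially a derivative of the generator's output with respect to each input feature. A hedged sketch using central finite differences on a stand-in generator (the `generator` below is a hypothetical placeholder, not the paper's fitted VAE decoder):

```python
import numpy as np

# Hypothetical stand-in: any fitted decoder/generator could be plugged in.
def generator(x):
    return np.tanh(x @ np.array([[0.8, -0.2], [0.1, 0.5], [0.0, 0.9]]))

def sensitivity(model, x, eps=1e-4):
    """Local sensitivity of each output to each input feature of a
    single record x, via central finite differences."""
    grads = []
    for i in range(len(x)):
        up, dn = x.copy(), x.copy()
        up[i] += eps
        dn[i] -= eps
        grads.append((model(up) - model(dn)) / (2 * eps))
    return np.abs(np.array(grads))      # shape: (n_inputs, n_outputs)

x = np.array([0.1, -0.3, 0.2])
S = sensitivity(generator, x)
print(S.shape)  # (3, 2)
```

Averaging such per-record matrices over a dataset gives a global view, while a single record's matrix is the local interpretation.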
       
  • Algorithms, Vol. 16, Pages 122: Integral Backstepping Control Algorithm
           for a Quadrotor Positioning Flight Task: A Design Issue Discussion

    • Authors: Yang-Rui Li, Chih-Chia Chen, Chao-Chung Peng
      First page: 122
      Abstract: For quadrotor control applications, it is necessary to rely on attitude angle changes to indirectly achieve position trajectory tracking. Several existing studies omit the non-negligible attitude transients in the position controller design for this kind of cascade system. As a result, the position tracking performance is not as good as expected. In fact, the transient behavior of the attitude tracking response cannot be ignored; therefore, the closed-loop stability of the attitude loop and the position tracking should be considered simultaneously. In this study, the flight controller design for the position and attitude control loops is presented based on an integral backstepping control algorithm. This control algorithm relies on the derivatives of the associated virtual control laws for its implementation. In the existing literature, the derivatives of the virtual control law are approximated by numerical differentiation. In practical scenarios, however, numerical differentiation causes chattering of the control signals in the presence of unavoidable measurement noise. The noise-induced control signals may damage the actuators or even cause the system response to diverge. To address this issue, the analytic form of the derivative of the virtual control law is derived. The time derivative of the virtual control law is analyzed and split into a disturbance-independent compensable term and a disturbance-dependent non-compensable term. By utilizing the compensable term, the control chattering due to differentiation of the noise can be largely avoided. The simulation results reveal that the proposed control algorithm has better position tracking performance than the traditional dual-loop control scheme, while a relatively smooth control signal is obtained for realistic control algorithm realization. Simulations are provided to illustrate the position tracking issue of a quadrotor and to demonstrate the effectiveness of the proposed control scheme.
      Citation: Algorithms
      PubDate: 2023-02-16
      DOI: 10.3390/a16020122
      Issue No: Vol. 16, No. 2 (2023)
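The compensable/non-compensable split described above can be illustrated on a generic second-order example (a hedged sketch, not the paper's quadrotor dynamics; the gains $k_1$, $\lambda$ and disturbance $d$ are assumed names):

```latex
% Plant: \dot{x}_1 = x_2 + d, \quad \dot{x}_2 = u, with disturbance d.
% Tracking error and integral-backstepping virtual control:
e_1 = x_1 - x_{1d}, \qquad
\alpha = \dot{x}_{1d} - k_1 e_1 - \lambda \int_0^t e_1 \, d\tau .
% Differentiating analytically, using \dot{e}_1 = x_2 + d - \dot{x}_{1d}:
\dot{\alpha} = \ddot{x}_{1d} - k_1 \dot{e}_1 - \lambda e_1
  = \underbrace{\ddot{x}_{1d} - k_1\!\left(x_2 - \dot{x}_{1d}\right) - \lambda e_1}_{\text{compensable: measured states only}}
    \; \underbrace{-\, k_1 d}_{\text{non-compensable}} .
```

The first group depends only on measured states and reference signals, so it can be computed exactly without numerical differentiation; only the disturbance-dependent term is left uncompensated.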
       
  • Algorithms, Vol. 16, Pages 123: IRONEDGE: Stream Processing Architecture
           for Edge Applications

    • Authors: João Pedro Vitorino, José Simão, Nuno Datia, Matilde Pato
      First page: 123
      Abstract: This paper presents IRONEDGE, an architectural framework that can be used in different edge Stream Processing solutions for “Smart Infrastructure” scenarios on a case-by-case basis. The architectural framework identifies the common components that any such solution should implement and a generic processing pipeline. In particular, the framework is considered in the context of a case study regarding Internet of Things (IoT) devices to be attached to rolling stock in a railway. Scenarios combining a lack of computation and storage resources on edge devices with infrequent network connectivity are not often considered in the existing literature, but are addressed in this paper. Two distinct implementations of IRONEDGE were considered and tested. The first, identified as Apache Kafka with Kafka Connect (K0-WC), uses Kafka Connect to pass messages from MQ Telemetry Transport (MQTT) to Apache Kafka. The second, identified as Apache Kafka with No Kafka Connect (K1-NC), allows Apache Storm to consume messages directly. When the data rate increased, K0-WC showed low throughput resulting from high losses, whereas K1-NC displayed an increase in throughput but did not match the input rate for the Data Reports. The results showed that the framework can be used to define new solutions for edge Stream Processing scenarios and identified a reference implementation for the considered case study. In future work, the authors propose to extend the evaluation of the architectural variation of K1-NC.
      Citation: Algorithms
      PubDate: 2023-02-17
      DOI: 10.3390/a16020123
      Issue No: Vol. 16, No. 2 (2023)
       
  • Algorithms, Vol. 16, Pages 124: Fourier Neural Operator Network for Fast
           Photoacoustic Wave Simulations

    • Authors: Steven Guan, Ko-Tsung Hsu, Parag V. Chitnis
      First page: 124
      Abstract: Simulation tools for photoacoustic wave propagation have played a key role in advancing photoacoustic imaging by providing quantitative and qualitative insights into parameters affecting image quality. Classical methods for numerically solving the photoacoustic wave equation rely on a fine discretization of space and can become computationally expensive for large computational grids. In this work, we applied Fourier Neural Operator (FNO) networks as a fast data-driven deep learning method for solving the 2D photoacoustic wave equation in a homogeneous medium. Comparisons between the FNO network and the pseudo-spectral time domain approach were made for the forward and adjoint simulations. Results demonstrate that the FNO network generated comparable simulations with small errors and was orders of magnitude faster than the pseudo-spectral time domain methods (~26× faster on a 64 × 64 computational grid and ~15× faster on a 128 × 128 computational grid). Moreover, the FNO network was generalizable to the unseen out-of-domain test set with a root-mean-square error of 9.5 × 10−3 in Shepp–Logan, 1.5 × 10−2 in synthetic vasculature, 1.1 × 10−2 in tumor and 1.9 × 10−2 in Mason-M phantoms on a 64 × 64 computational grid and a root-mean-square error of 6.9 ± 5.5 × 10−3 in the AWA2 dataset on a 128 × 128 computational grid.
      Citation: Algorithms
      PubDate: 2023-02-19
      DOI: 10.3390/a16020124
      Issue No: Vol. 16, No. 2 (2023)
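The FNO's core layer is cheap to sketch: transform to Fourier space, apply a learned complex multiplier to the lowest modes, truncate the rest, and transform back. A 1D NumPy toy of that mechanism (the paper's networks are 2D and trained; the unit weights here only demonstrate the mechanics):

```python
import numpy as np

def spectral_conv_1d(x, weights):
    """Core FNO operation: FFT, per-mode complex multiply on the lowest
    len(weights) modes (higher modes truncated to zero), inverse FFT."""
    X = np.fft.rfft(x)
    Y = np.zeros_like(X)
    m = len(weights)
    Y[:m] = X[:m] * weights            # learned multipliers in practice
    return np.fft.irfft(Y, n=len(x))

x = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
# With unit weights on every retained mode the layer is the identity map
full = spectral_conv_1d(x, np.ones(33, dtype=complex))
print(np.allclose(full, x))  # True
```

Because the multipliers act per frequency rather than per grid point, the learned operator is resolution-independent, which is what lets an FNO trained at one grid size generalize across discretizations.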
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


JournalTOCs © 2009-