Subjects -> COMPUTER SCIENCE (Total: 2313 journals)
    - ANIMATION AND SIMULATION (33 journals)
    - ARTIFICIAL INTELLIGENCE (133 journals)
    - AUTOMATION AND ROBOTICS (116 journals)
    - COMPUTER ARCHITECTURE (11 journals)
    - COMPUTER ENGINEERING (12 journals)
    - COMPUTER GAMES (23 journals)
    - COMPUTER PROGRAMMING (25 journals)
    - COMPUTER SCIENCE (1305 journals)
    - COMPUTER SECURITY (59 journals)
    - DATA BASE MANAGEMENT (21 journals)
    - DATA MINING (50 journals)
    - E-BUSINESS (21 journals)
    - E-LEARNING (30 journals)
    - IMAGE AND VIDEO PROCESSING (42 journals)
    - INFORMATION SYSTEMS (109 journals)
    - INTERNET (111 journals)
    - SOCIAL WEB (61 journals)
    - SOFTWARE (43 journals)
    - THEORY OF COMPUTING (10 journals)

COMPUTER SCIENCE (1305 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 27)
Abakós     Open Access   (Followers: 3)
ACM Computing Surveys     Hybrid Journal   (Followers: 29)
ACM Inroads     Full-text available via subscription   (Followers: 1)
ACM Journal of Computer Documentation     Free   (Followers: 4)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 5)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 11)
ACM SIGACCESS Accessibility and Computing     Free   (Followers: 2)
ACM SIGAPP Applied Computing Review     Full-text available via subscription  
ACM SIGBioinformatics Record     Full-text available via subscription  
ACM SIGEVOlution     Full-text available via subscription  
ACM SIGHIT Record     Full-text available via subscription  
ACM SIGHPC Connect     Full-text available via subscription  
ACM SIGITE Newsletter     Open Access   (Followers: 1)
ACM SIGMIS Database: the DATABASE for Advances in Information Systems     Hybrid Journal  
ACM SIGUCCS plugged in     Full-text available via subscription  
ACM SIGWEB Newsletter     Full-text available via subscription   (Followers: 4)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 13)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 3)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)     Hybrid Journal  
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 10)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 11)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 5)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 19)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 15)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 9)
ACM Transactions on Computing for Healthcare     Hybrid Journal  
ACM Transactions on Cyber-Physical Systems (TCPS)     Hybrid Journal   (Followers: 1)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 5)
ACM Transactions on Economics and Computation     Hybrid Journal  
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 4)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 11)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Internet of Things     Hybrid Journal   (Followers: 2)
ACM Transactions on Modeling and Performance Evaluation of Computing Systems (ToMPECS)     Hybrid Journal  
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 10)
ACM Transactions on Parallel Computing     Full-text available via subscription  
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 9)
ACM Transactions on Social Computing     Hybrid Journal  
ACM Transactions on Spatial Algorithms and Systems (TSAS)     Hybrid Journal   (Followers: 1)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 11)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 39)
Acta Informatica Malaysia     Open Access  
Acta Universitatis Cibiniensis. Technical Series     Open Access   (Followers: 1)
Ad Hoc Networks     Hybrid Journal   (Followers: 12)
Adaptive Behavior     Hybrid Journal   (Followers: 8)
Additive Manufacturing Letters     Open Access   (Followers: 3)
Advanced Engineering Materials     Hybrid Journal   (Followers: 32)
Advanced Science Letters     Full-text available via subscription   (Followers: 9)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 9)
Advances in Artificial Intelligence     Open Access   (Followers: 31)
Advances in Catalysis     Full-text available via subscription   (Followers: 7)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 20)
Advances in Computer Engineering     Open Access   (Followers: 13)
Advances in Computer Science : an International Journal     Open Access   (Followers: 18)
Advances in Computing     Open Access   (Followers: 3)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 52)
Advances in Engineering Software     Hybrid Journal   (Followers: 26)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 19)
Advances in Human-Computer Interaction     Open Access   (Followers: 19)
Advances in Image and Video Processing     Open Access   (Followers: 20)
Advances in Materials Science     Open Access   (Followers: 19)
Advances in Multimedia     Open Access   (Followers: 1)
Advances in Operations Research     Open Access   (Followers: 13)
Advances in Remote Sensing     Open Access   (Followers: 59)
Advances in Science and Research (ASR)     Open Access   (Followers: 8)
Advances in Technology Innovation     Open Access   (Followers: 5)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 6)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 5)
AI EDAM     Hybrid Journal   (Followers: 2)
Air, Soil & Water Research     Open Access   (Followers: 6)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 5)
Al-Qadisiyah Journal for Computer Science and Mathematics     Open Access   (Followers: 2)
AL-Rafidain Journal of Computer Sciences and Mathematics     Open Access   (Followers: 3)
Algebras and Representation Theory     Hybrid Journal  
Algorithms     Open Access   (Followers: 13)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 8)
American Journal of Computational Mathematics     Open Access   (Followers: 6)
American Journal of Information Systems     Open Access   (Followers: 4)
American Journal of Sensor Technology     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 15)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 4)
Annals of Combinatorics     Hybrid Journal   (Followers: 3)
Annals of Data Science     Hybrid Journal   (Followers: 14)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 16)
Annals of Pure and Applied Logic     Open Access   (Followers: 4)
Annals of Software Engineering     Hybrid Journal   (Followers: 12)
Annual Reviews in Control     Hybrid Journal   (Followers: 7)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 3)
Applied and Computational Harmonic Analysis     Full-text available via subscription  
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 17)
Applied Categorical Structures     Hybrid Journal   (Followers: 4)
Applied Clinical Informatics     Hybrid Journal   (Followers: 4)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 16)
Applied Computer Systems     Open Access   (Followers: 6)
Applied Computing and Geosciences     Open Access   (Followers: 3)
Applied Mathematics and Computation     Hybrid Journal   (Followers: 31)
Applied Medical Informatics     Open Access   (Followers: 11)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 4)
Applied Soft Computing     Hybrid Journal   (Followers: 13)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 5)
Applied System Innovation     Open Access   (Followers: 1)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 4)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 97)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 5)
arq: Architectural Research Quarterly     Hybrid Journal   (Followers: 7)
Array     Open Access   (Followers: 1)
Artifact : Journal of Design Practice     Open Access   (Followers: 8)
Artificial Life     Hybrid Journal   (Followers: 7)
Asian Journal of Computer Science and Information Technology     Open Access   (Followers: 3)
Asian Journal of Control     Hybrid Journal  
Asian Journal of Research in Computer Science     Open Access   (Followers: 4)
Assembly Automation     Hybrid Journal   (Followers: 2)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 6)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 13)
Automatika : Journal for Control, Measurement, Electronics, Computing and Communications     Open Access  
Automation in Construction     Hybrid Journal   (Followers: 8)
Balkan Journal of Electrical and Computer Engineering     Open Access  
Basin Research     Hybrid Journal   (Followers: 7)
Behaviour & Information Technology     Hybrid Journal   (Followers: 32)
BenchCouncil Transactions on Benchmarks, Standards, and Evaluations     Open Access   (Followers: 4)
Big Data and Cognitive Computing     Open Access   (Followers: 5)
Big Data Mining and Analytics     Open Access   (Followers: 10)
Biodiversity Information Science and Standards     Open Access   (Followers: 1)
Bioinformatics     Hybrid Journal   (Followers: 216)
Bioinformatics Advances : Journal of the International Society for Computational Biology     Open Access   (Followers: 1)
Biomedical Engineering     Hybrid Journal   (Followers: 11)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 11)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 43)
British Journal of Educational Technology     Hybrid Journal   (Followers: 93)
Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics     Open Access  
c't Magazin für Computertechnik     Full-text available via subscription   (Followers: 1)
Cadernos do IME : Série Informática     Open Access  
CALCOLO     Hybrid Journal  
CALICO Journal     Full-text available via subscription   (Followers: 1)
Calphad     Hybrid Journal  
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 14)
Catalysis in Industry     Hybrid Journal  
CCF Transactions on High Performance Computing     Hybrid Journal  
CCF Transactions on Pervasive Computing and Interaction     Hybrid Journal  
CEAS Space Journal     Hybrid Journal   (Followers: 6)
Cell Communication and Signaling     Open Access   (Followers: 3)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 4)
CERN IdeaSquare Journal of Experimental Innovation     Open Access  
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 1)
Chaos, Solitons & Fractals : X     Open Access   (Followers: 1)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 13)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 8)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
Chip     Full-text available via subscription   (Followers: 3)
Ciencia     Open Access  
CIN : Computers Informatics Nursing     Hybrid Journal   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 16)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Clinical eHealth     Open Access  
Cluster Computing     Hybrid Journal   (Followers: 1)
Cognitive Computation     Hybrid Journal   (Followers: 2)
Cognitive Computation and Systems     Open Access  
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 18)
Communication Methods and Measures     Hybrid Journal   (Followers: 12)
Communication Theory     Hybrid Journal   (Followers: 29)
Communications in Algebra     Hybrid Journal   (Followers: 1)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 2)
Communications of the ACM     Full-text available via subscription   (Followers: 59)
Communications of the Association for Information Systems     Open Access   (Followers: 15)
Communications on Applied Mathematics and Computation     Hybrid Journal   (Followers: 1)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 4)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 8)
Computación y Sistemas     Open Access  
Computation     Open Access   (Followers: 1)
Computational and Applied Mathematics     Hybrid Journal   (Followers: 3)
Computational and Mathematical Methods     Hybrid Journal  
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 1)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 1)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 11)
Computational Astrophysics and Cosmology     Open Access   (Followers: 6)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 13)
Computational Biology Journal     Open Access   (Followers: 6)
Computational Brain & Behavior     Hybrid Journal   (Followers: 1)
Computational Chemistry     Open Access   (Followers: 3)
Computational Communication Research     Open Access   (Followers: 1)
Computational Complexity     Hybrid Journal   (Followers: 5)
Computational Condensed Matter     Open Access   (Followers: 1)


Algorithms
Journal Prestige (SJR): 0.217
Citation Impact (CiteScore): 1
Number of Followers: 13  

  This is an Open Access journal
ISSN (Print): 1999-4893
Published by MDPI  [84 journals]
  • Algorithms, Vol. 15, Pages 135: MAC Address Anonymization for Crowd
           Counting

    • Authors: Jean-François Determe, Sophia Azzagnuni, François Horlin, Philippe De Doncker
      First page: 135
      Abstract: Research has shown that counting WiFi packets called probe requests (PRs) implicitly provides a proxy for the number of people in an area. In this paper, we discuss a crowd counting system in which WiFi sensors detect PRs over the air, then extract and anonymize their media access control (MAC) addresses using a hash-based approach. We describe the anonymization procedure and show that time-synchronization inaccuracies among sensors and hash collision rates are both low enough for anonymization not to interfere with the counting algorithms. In particular, we derive an approximation of the collision rate of uniformly distributed identifiers, with analytical error bounds.
      Citation: Algorithms
      PubDate: 2022-04-20
      DOI: 10.3390/a15050135
      Issue No: Vol. 15, No. 5 (2022)
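The hash-based anonymization step described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact procedure; the salt value, its rotation policy, and the 12-hex-digit truncation are assumptions:

```python
import hashlib

def anonymize_mac(mac: str, salt: bytes) -> str:
    """Salt and hash a MAC address; keep a truncated digest as the counting ID."""
    digest = hashlib.sha256(salt + mac.lower().encode()).hexdigest()
    return digest[:12]  # truncation trades storage for a small collision risk

salt = b"rotate-me-hourly"  # rotating the salt limits long-term tracking
ids = [anonymize_mac(m, salt) for m in
       ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02", "aa:bb:cc:dd:ee:01"]]
# the same device (case-insensitive MAC) always maps to the same identifier
```

For n devices and an identifier space of size N, the expected number of collisions among uniformly distributed identifiers is roughly n(n−1)/(2N) (the birthday bound), which is the kind of approximation the paper derives with error bounds.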
  • Algorithms, Vol. 15, Pages 136: Guided Semi-Supervised Non-Negative Matrix
           Factorization

    • Authors: Pengyu Li, Christine Tseng, Yaxuan Zheng, Joyce A. Chew, Longxiu Huang, Benjamin Jarman, Deanna Needell
      First page: 136
      Abstract: Classification and topic modeling are popular techniques in machine learning that extract information from large-scale datasets. By incorporating a priori information such as labels or important features, methods have been developed to perform classification and topic modeling tasks; however, most methods that can perform both do not allow for guidance of the topics or features. In this paper, we propose a novel method, namely Guided Semi-Supervised Non-negative Matrix Factorization (GSSNMF), that performs both classification and topic modeling by incorporating supervision from both pre-assigned document class labels and user-designed seed words. We test the performance of this method on legal documents provided by the California Innocence Project and the 20 Newsgroups dataset. Our results show that the proposed method improves both classification accuracy and topic coherence in comparison to past methods such as Semi-Supervised Non-negative Matrix Factorization (SSNMF), Guided Non-negative Matrix Factorization (Guided NMF), and Topic Supervised NMF.
      Citation: Algorithms
      PubDate: 2022-04-20
      DOI: 10.3390/a15050136
      Issue No: Vol. 15, No. 5 (2022)
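At the core of GSSNMF is non-negative matrix factorization itself. The sketch below shows only plain NMF via the classic multiplicative updates (the paper's method adds supervision terms for class labels and seed words on top of this); the matrix sizes and iteration count are arbitrary:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Factor X ≈ W @ H with W, H non-negative, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-10  # avoids division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.random.default_rng(1).random((20, 30))  # toy document-term-like matrix
W, H = nmf(X, k=5)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)  # relative reconstruction error
```

Each multiplicative update is non-increasing in the Frobenius reconstruction error, which is why this form needs no step size; supervision terms such as GSSNMF's change the update numerators and denominators but keep the same multiplicative pattern.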
  • Algorithms, Vol. 15, Pages 137: Behavioral Game Theory Model in Pollution
           Control with Additional Supervision

    • Authors: Qianqian Gu, Lei Hang, Shaorong Sun
      First page: 137
      Abstract: This work studies the impact of external supervision on the strategies of pollutant-discharging enterprises and government regulators in the field of environmental protection. The authors construct a model of the relationship between these entities: an evolutionary game whose players are the enterprises that generate pollutants and the government departments that supervise pollution. The strategy choices of both players and the evolutionary stability of the system under different levels of regulatory effort by a third party are analyzed. Using simulation analysis based on this model, the authors then verify its evolutionary paths and outcomes under different conditions. The research shows that weak third-party supervision is not enough to drive the behavioral decisions of the government and enterprises to evolve. An appropriate increase in the power of third-party supervision changes the strategy choices of the government and enterprises in the short term; however, because the strategies of the two players influence each other, the evolutionary system does not reach a stable state in this situation. Strong third-party supervision pushes enterprises to choose a pollution control strategy, changes the intensity of government supervision, and replaces government supervision to a certain extent. The model is an interesting example of describing such a system with evolutionary game theory, and the findings can serve as a theoretical reference for controlling the environmental pollution of enterprises.
      Citation: Algorithms
      PubDate: 2022-04-21
      DOI: 10.3390/a15050137
      Issue No: Vol. 15, No. 5 (2022)
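Evolutionary games of this kind are typically analyzed with replicator dynamics. The sketch below is a hypothetical two-population example, not the paper's model: the payoff advantages are chosen so that pollution control dominates for enterprises (a "strong third-party supervision" scenario) and active government supervision loses value once most enterprises comply:

```python
import numpy as np  # kept for consistency with the other sketches

def replicator(x, y, dA, dB, T=50.0, dt=0.01):
    """Two-population replicator dynamics, two strategies per population.
    x: share of enterprises choosing pollution control; y: share of government
    regulators supervising actively. dA(y), dB(x) give the payoff advantage of
    the first strategy over the second for each population."""
    for _ in range(int(T / dt)):
        x += dt * x * (1 - x) * dA(y)
        y += dt * y * (1 - y) * dB(x)
    return x, y

# Illustrative payoffs: control always beats polluting (advantage 2..5), while
# supervising stops paying off once most enterprises comply (4 - 5x < 0).
x, y = replicator(0.1, 0.5,
                  dA=lambda y: 3 * y + 2,
                  dB=lambda x: 4 - 5 * x)
```

Under these invented payoffs the system evolves toward enterprises controlling pollution (x → 1) while government supervision relaxes (y → 0), mirroring the abstract's observation that strong third-party supervision can partly replace government supervision.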
  • Algorithms, Vol. 15, Pages 138: Adaptive Cooperative Control of Multiple
           Urban Rail Trains with Position Output Constraints

    • Authors: Junxia Yang, Youpeng Zhang
      First page: 138
      Abstract: This paper studies the distributed adaptive cooperative control of multiple urban rail trains with position output constraints and uncertain parameters. Based on an ordered set of trains running on a route, a dynamic multiple-train movement model is constructed to capture how the trains evolve in actual operation. To handle the position constraints and uncertainties in the system, distributed adaptive control algorithms are designed for all trains using local information about position, speed and acceleration, so that each train can dynamically adjust its speed by communicating with its neighboring trains. The control algorithm is designed so that each train tracks the desired position and speed curves while the headway distance between any two neighboring trains remains within a preset safety range, which guarantees the safe tracking operation of multiple urban rail trains. Finally, the effectiveness of the designed scheme is verified by numerical examples.
      Citation: Algorithms
      PubDate: 2022-04-21
      DOI: 10.3390/a15050138
      Issue No: Vol. 15, No. 5 (2022)
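The headway-keeping idea can be illustrated with a much simpler, non-adaptive controller than the paper's: each follower accelerates in proportion to its spacing error and to the speed difference with the train ahead. All gains, speeds and distances below are invented, and the paper's algorithms additionally handle parameter uncertainty and output constraints:

```python
import numpy as np

def simulate(n_trains=4, steps=3000, dt=0.01, v_ref=20.0, headway=100.0,
             kp=0.5, kv=1.0):
    """Toy platoon: lead train holds v_ref; each follower restores its headway."""
    pos = np.array([i * -80.0 for i in range(n_trains)])  # spacing starts too short
    vel = np.full(n_trains, v_ref)
    for _ in range(steps):
        acc = np.zeros(n_trains)
        for i in range(1, n_trains):
            gap = pos[i - 1] - pos[i]
            acc[i] = kp * (gap - headway) + kv * (vel[i - 1] - vel[i])
        vel += acc * dt
        pos += vel * dt
    return pos, vel

pos, vel = simulate()
gaps = pos[:-1] - pos[1:]  # converge toward the desired 100 m headway
```

Each follower's spacing error obeys a damped second-order equation (ë + kv·ė + kp·e = 0 relative to its leader), so the gaps settle to the preset headway; the adaptive schemes in the paper achieve this without knowing the train parameters.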
  • Algorithms, Vol. 15, Pages 139: Cost-Sensitive Variational Autoencoding
           Classifier for Imbalanced Data Classification

    • Authors: Fen Liu, Quan Qian
      First page: 139
      Abstract: Classification is among the core tasks in machine learning. Existing classification algorithms typically assume at least roughly balanced data classes; when applied to imbalanced data, such classifiers sacrifice the minority class for overall accuracy. This is a problem because minority-class samples are often the more important ones, such as positive samples in disease diagnosis. In this study, we propose a cost-sensitive variational autoencoding classifier that combines data-level and algorithm-level methods to solve the problem of imbalanced data classification. Cost-sensitive factors are introduced to assign a high cost to the misclassification of minority data, which biases the classifier toward the minority class. We also design misclassification costs that are closely related to the task by embedding domain knowledge. Experimental results show that the proposed method performs well on the classification of bulk amorphous materials.
      Citation: Algorithms
      PubDate: 2022-04-21
      DOI: 10.3390/a15050139
      Issue No: Vol. 15, No. 5 (2022)
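The cost-sensitive ingredient can be shown in isolation with a plain logistic-regression classifier standing in for the paper's variational autoencoder. Per-sample misclassification costs reweight the gradient so the fit is biased toward the minority class; the synthetic data and the 10:1 cost ratio are made up:

```python
import numpy as np

def fit_logreg(X, y, cost_pos=10.0, cost_neg=1.0, lr=0.1, iters=500):
    """Gradient descent on a cost-weighted logistic loss."""
    w = np.zeros(X.shape[1]); b = 0.0
    costs = np.where(y == 1, cost_pos, cost_neg)  # per-sample misclassification cost
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = costs * (p - y)                        # cost-weighted gradient signal
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
X_neg = rng.normal(0.0, 1.0, (95, 2))
X_pos = rng.normal(1.5, 1.0, (5, 2))              # rare minority class
X = np.vstack([X_neg, X_pos]); y = np.r_[np.zeros(95), np.ones(5)]
w, b = fit_logreg(X, y)
recall = ((X_pos @ w + b) > 0).mean()             # minority-class recall
```

Compared with an unweighted fit, the cost weighting shifts the decision boundary toward the majority class, raising minority-class recall at the price of more false positives; the paper additionally derives the costs from domain knowledge.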
  • Algorithms, Vol. 15, Pages 140: Real-Time Big Data Architecture for
           Processing Cryptocurrency and Social Media Data: A Clustering Approach
           Based on k-Means

    • Authors: Adrian Barradas, Acela Tejeda-Gil, Rosa-María Cantón-Croda
      First page: 140
      Abstract: Cryptocurrencies have recently emerged as financial assets that allow their users to execute transactions in a decentralized manner. Their popularity has led to the generation of huge amounts of data, specifically on social media networks such as Twitter. In this study, we propose an iterative kappa architecture that collects, processes, and temporarily stores data regarding transactions and tweets of two of the major cryptocurrencies by market capitalization: Bitcoin (BTC) and Ethereum (ETH). We applied a k-means clustering approach to group data according to their principal characteristics. Data are categorized into three groups: BTC typical data, ETH typical data, and BTC/ETH atypical data. Findings show that activity on Twitter correlates with cryptocurrency transaction activity. It was also found that around 14% of the data relate to extraordinary behaviors regarding cryptocurrencies. These data contain higher transaction volumes of both cryptocurrencies, and about 9.5% more social media publications in comparison with the rest of the data. The main advantages of the proposed architecture are its flexibility and its ability to relate data from various datasets.
      Citation: Algorithms
      PubDate: 2022-04-22
      DOI: 10.3390/a15050140
      Issue No: Vol. 15, No. 5 (2022)
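A minimal k-means clusterer in the spirit of the analysis stage might look as follows. The two features per time window ([transaction volume, tweet count]) and the three synthetic groups mimic the paper's categories (BTC typical, ETH typical, atypical) but are entirely fabricated; farthest-point seeding keeps the toy deterministic:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Farthest-point seeding followed by Lloyd's iterations."""
    idx = [0]
    for _ in range(k - 1):  # pick the point farthest from all chosen seeds
        d = np.linalg.norm(X[:, None] - X[idx][None], axis=2).min(axis=1)
        idx.append(int(d.argmax()))
    centroids = X[idx].copy()
    for _ in range(iters):  # assign to nearest centroid, recompute means
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
typical_btc = rng.normal([10.0, 5.0], 1.0, (40, 2))   # [tx volume, tweets]
typical_eth = rng.normal([4.0, 2.0], 1.0, (40, 2))
atypical = rng.normal([25.0, 20.0], 2.0, (10, 2))     # extraordinary behavior
X = np.vstack([typical_btc, typical_eth, atypical])
labels, centroids = kmeans(X, 3)
```

In the paper the clustering runs inside a kappa architecture, i.e., over a continuously replayed stream rather than a fixed array as here.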
  • Algorithms, Vol. 15, Pages 141: Proving Fixed-Point Theorems Employing
           Fuzzy (σ,𝒵)-Contractive-Type Mappings

    • Authors: Hayel N. Saleh, Mohammad Imdad, Salvatore Sessa, Ferdinando Di Martino
      First page: 141
      Abstract: In this article, the concept of fuzzy (σ,Z)-contractive mappings is introduced in the setting of fuzzy metric spaces. Thereafter, we utilize our newly introduced concept to prove some existence and uniqueness theorems in M-complete fuzzy metric spaces. Our obtained theorems extend and generalize the corresponding results in the existing literature. Moreover, some examples are adopted to exhibit the utility of the newly obtained results.
      Citation: Algorithms
      PubDate: 2022-04-22
      DOI: 10.3390/a15050141
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 142: Outsmarting Human Design in Airline
           Revenue Management

    • Authors: Giovanni Gatti Pinheiro, Michael Defoin-Platel, Jean-Charles Regin
      First page: 142
      Abstract: The accurate estimation of how future demand will react to prices is central to the optimization of pricing decisions. The systems responsible for demand prediction and pricing optimization are called revenue management (RM) systems, and, in the airline industry, they play an important role in the company’s profitability. As airlines’ current pricing decisions impact future knowledge of the demand behavior, the RM systems may have to compromise immediate revenue by efficiently performing price experiments with the expectation that the information gained about the demand behavior will lead to better future pricing decisions. This earning while learning (EWL) problem has captured the attention of both the industry and academia in recent years, resulting in many proposed solutions based on heuristic optimization. We take a different approach that does not depend on human-designed heuristics. We present the EWL problem to a reinforcement learning agent, and the agent’s goal is to maximize long-term revenue without explicitly considering the optimal way to perform price experimentation. The agent discovers through experience that “myopic” revenue-maximizing policies may lead to a decrease in the demand model quality (which it relies on to take decisions). We show that the agent finds novel pricing policies that balance revenue maximization and demand model quality in a surprisingly effective way, generating more revenue over the long run than current practices.
      Citation: Algorithms
      PubDate: 2022-04-22
      DOI: 10.3390/a15050142
      Issue No: Vol. 15, No. 5 (2022)
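The earning-while-learning trade-off can be illustrated with a far simpler agent than the paper's reinforcement-learning one: an epsilon-greedy bandit that sells at candidate prices, maintains a running estimate of mean revenue per price (its "demand model"), and occasionally runs price experiments. All prices and purchase probabilities are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.array([50.0, 100.0, 150.0, 200.0])
buy_prob = np.array([0.9, 0.6, 0.35, 0.1])     # hidden demand curve
est = np.zeros(4)                              # estimated mean revenue per price
counts = np.zeros(4)
total = 0.0
for t in range(20000):
    # explore (price experiment) with prob. 0.1, else exploit the demand model
    a = rng.integers(4) if rng.random() < 0.1 else int(est.argmax())
    r = prices[a] * float(rng.random() < buy_prob[a])   # realized revenue
    counts[a] += 1
    est[a] += (r - est[a]) / counts[a]          # running-mean update
    total += r
best = int(est.argmax())
```

Roughly 10% of decisions sacrifice immediate revenue to keep the demand estimates accurate; with these illustrative numbers the agent should settle near the revenue-maximizing price of 100. The paper's contribution is that a general RL agent discovers this balance without an explicit exploration rule.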
  • Algorithms, Vol. 15, Pages 143: Enhancing Attention’s Explanation
           Using Interpretable Tsetlin Machine

    • Authors: Rohan Kumar Yadav, Dragoş Constantin Nicolae
      First page: 143
      Abstract: Explainability is one of the key factors in Natural Language Processing (NLP), especially for legal documents, medical diagnosis, and clinical text. The attention mechanism has recently been a popular choice for providing such explainability by estimating the relative importance of input units. Recent research has revealed, however, that such processes tend to misidentify irrelevant input units as important. This is because the language representation layers are initialized with pre-trained word embeddings that are not context-dependent. Such a lack of context-dependent knowledge in the initial layer makes it difficult for the model to concentrate on the important aspects of the input. Usually this does not affect the performance of the model, but the explanations diverge from human understanding. Hence, in this paper, we propose an ensemble method that embeds logic-based information from the Tsetlin Machine into the initial representation layer of the neural network to enhance the model's explainability. We obtain a global clause score for each word in the vocabulary and feed it into the neural network layer as context-dependent information. Our experiments show that the ensemble method enhances the explainability of the attention layer without sacrificing any model performance, and even improves performance on some datasets.
      Citation: Algorithms
      PubDate: 2022-04-22
      DOI: 10.3390/a15050143
      Issue No: Vol. 15, No. 5 (2022)
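The ensemble idea of injecting logic-derived, per-word scores into the first layer can be sketched as follows. The vocabulary, the 8-dimensional embeddings, and the scores are all hypothetical; in the paper the scores are global clause scores produced by a trained Tsetlin Machine:

```python
import numpy as np

vocab = ["the", "contract", "is", "void"]
# stand-in for pre-trained word embeddings (random here, for illustration)
emb = {w: np.random.default_rng(i).normal(size=8) for i, w in enumerate(vocab)}
# hypothetical logic-based relevance scores, one per vocabulary word
clause_score = {"the": 0.0, "contract": 2.5, "is": 0.1, "void": 3.1}

def represent(tokens):
    """Embedding ⊕ clause score: adds a logic-based signal to the first layer."""
    return np.stack([np.append(emb[t], clause_score[t]) for t in tokens])

X = represent(vocab)  # shape (tokens, embedding_dim + 1)
```

The extra coordinate gives the downstream attention layer a task-relevance signal that the static pre-trained embeddings lack.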
  • Algorithms, Vol. 15, Pages 144: Simple Black-Box Universal Adversarial
           Attacks on Deep Neural Networks for Medical Image Classification

    • Authors: Kazuki Koga, Kazuhiro Takemoto
      First page: 144
      Abstract: Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called a universal adversarial perturbation (UAP), are a realistic security threat to the practical application of DNNs for medical imaging. Because computer-based systems are generally operated under a black-box condition, in which only input queries are allowed and only outputs are accessible, the impact of UAPs might seem limited: widely used algorithms for generating UAPs assume white-box conditions, in which adversaries can access the model parameters. Nevertheless, we propose a method for generating UAPs using a simple hill-climbing search based only on DNN outputs, and demonstrate that UAPs are easily generatable from a relatively small dataset under black-box conditions for representative DNN-based medical image classifiers. Black-box UAPs can be used to conduct both nontargeted and targeted attacks, and overall they achieved high attack success rates (40–90%). Vulnerability to black-box UAPs was observed across several model architectures. The results indicate that adversaries can generate UAPs through a simple procedure under the black-box condition to foil or control DNN-based diagnostic medical imaging systems, and that UAPs are a more serious security threat than previously assumed.
      Citation: Algorithms
      PubDate: 2022-04-22
      DOI: 10.3390/a15050144
      Issue No: Vol. 15, No. 5 (2022)
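The black-box search can be illustrated against a stand-in model. Below, a random linear scorer plays the role of the DNN, and a single shared perturbation is hill-climbed using only the model's predicted labels; the perturbation bound, step size, and iteration budget are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))            # hidden model weights (never queried)

def predict(x):                          # the only access the attacker has
    return int((W @ x).argmax())

images = rng.normal(size=(20, 64))       # small attacker-held dataset
clean = np.array([predict(x) for x in images])

def fooling_rate(uap):
    """Fraction of inputs whose prediction the shared perturbation flips."""
    return float(np.mean([predict(x + uap) != c for x, c in zip(images, clean)]))

uap = np.zeros(64)
best = fooling_rate(uap)                 # 0.0 by construction
for _ in range(300):                     # simple hill-climbing search
    cand = np.clip(uap + rng.normal(scale=0.3, size=64), -2.0, 2.0)
    r = fooling_rate(cand)
    if r >= best:                        # accept non-worsening candidates
        best, uap = r, cand
```

Because only `predict` is queried, the same loop applies unchanged to any classifier exposed as a label oracle, which is what makes the black-box setting realistic.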
  • Algorithms, Vol. 15, Pages 145: Micro-Scale Spherical and Cylindrical
           Surface Modeling via Metaheuristic Algorithms and Micro Laser Line

    • Authors: J. Apolinar Muñoz Rodríguez
      First page: 145
      Abstract: With the growth of the micro-scale manufacturing industry, modeling micro-scale spherical and cylindrical surfaces has become an important factor in the manufacturing process: such processes require efficient spherical and cylindrical surface models to achieve accurate assembly. This study addresses metaheuristic algorithms based on micro laser line projection for modeling micro-scale spherical and cylindrical surfaces. In this technique, the micro-scale surface is recovered by an optical microscope system, which computes the surface coordinates via micro laser line projection. From the surface coordinates, a genetic algorithm determines the parameters of the mathematical models that represent the spherical and cylindrical surfaces, performing exploration and exploitation in a search space constructed from the surface data to find the optimal model parameters. The proposed technique improves the fitting accuracy of micro-scale spherical and cylindrical surface modeling performed via optical microscope systems. This contribution is elucidated by a discussion of the model fitting achieved by the genetic algorithms based on micro laser line projection compared with that of the optical microscope systems.
      Citation: Algorithms
      PubDate: 2022-04-24
      DOI: 10.3390/a15050145
      Issue No: Vol. 15, No. 5 (2022)
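The parameter search the abstract describes can be sketched with a toy genetic algorithm that fits a sphere's centre and radius to measured coordinates (a hedged illustration only: the population size, mutation scale, and least-squares fitness are assumptions, not the authors' settings):

```python
import numpy as np

def fit_sphere_ga(points, pop_size=60, generations=150, seed=0):
    """Toy genetic algorithm fitting sphere parameters (cx, cy, cz, r)
    to an (n, 3) array of measured surface coordinates."""
    rng = np.random.default_rng(seed)
    lo, hi = points.min() - 1.0, points.max() + 1.0

    def error(ind):
        # mean squared radial residual of the candidate sphere
        d = np.linalg.norm(points - ind[:3], axis=1)
        return np.mean((d - np.abs(ind[3])) ** 2)

    pop = rng.uniform(lo, hi, (pop_size, 4))
    for _ in range(generations):
        fitness = np.array([error(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # exploitation: keep best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, 0.05, children.shape)  # exploration: mutate
        pop = np.vstack([parents, children])
    best = min(pop, key=error)
    return best[:3], abs(best[3])
```

Because the best half of the population survives unchanged each generation, the best residual is monotonically non-increasing.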
  • Algorithms, Vol. 15, Pages 146: Large-Scale Multimodal Piano Music
           Identification Using Marketplace Fingerprinting

    • Authors: Daniel Yang, Arya Goutam, Kevin Ji, TJ Tsai
      First page: 146
      Abstract: This paper studies the problem of identifying piano music in various modalities using a single, unified approach called marketplace fingerprinting. The key defining characteristic of marketplace fingerprinting is choice: we consider a broad range of fingerprint designs based on a generalization of standard n-grams, and then select the fingerprint designs at runtime that are best for a specific query. We show that the large-scale retrieval problem can be framed as an economics problem in which a consumer and a store interact. In our analogy, the runtime search is like a consumer shopping in the store, the items for sale correspond to fingerprints, and purchasing an item corresponds to doing a fingerprint lookup in the database. Using basic principles of economics, we design an efficient marketplace in which the consumer has many options and adopts a rational buying strategy that explicitly considers the cost and expected utility of each item. We evaluate our marketplace fingerprinting approach on four different sheet music retrieval tasks involving sheet music images, MIDI files, and audio recordings. Using a database containing approximately 375,000 pages of sheet music, our method is able to achieve 0.91 mean reciprocal rank with sub-second average runtime on cell phone image queries. On all four retrieval tasks, the marketplace method substantially outperforms previous methods while simultaneously reducing average runtime. We present comprehensive experimental results, as well as detailed analyses to provide deeper intuition into system behavior.
      Citation: Algorithms
      PubDate: 2022-04-26
      DOI: 10.3390/a15050146
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 147: Squeezing Backbone Feature Distributions
           to the Max for Efficient Few-Shot Learning

    • Authors: Yuqing Hu, Stéphane Pateux, Vincent Gripon
      First page: 147
      Abstract: In many real-life problems, it is difficult to acquire or label large amounts of data, resulting in so-called few-shot learning problems. However, few-shot classification is a challenging problem due to the uncertainty caused by using few labeled samples. In the past few years, many methods have been proposed with the common aim of transferring knowledge acquired on a previously solved task, which is often achieved by using a pretrained feature extractor. As such, if the initial task contains many labeled samples, it is possible to circumvent the limitations of few-shot learning. A shortcoming of existing methods is that they often require priors about the data distribution, such as the balance between considered classes. In this paper, we propose a novel transfer-based method with a double aim: providing state-of-the-art performance, as reported on standardized datasets in the field of few-shot learning, while not requiring such restrictive priors. Our methodology is able to cope with both inductive cases, where prediction is performed on test samples independently from each other, and transductive cases, where a joint (batch) prediction is performed.
      Citation: Algorithms
      PubDate: 2022-04-26
      DOI: 10.3390/a15050147
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 148: On Information Granulation via Data
           Clustering for Granular Computing-Based Pattern Recognition: A Graph
           Embedding Case Study

    • Authors: Alessio Martino, Luca Baldini, Antonello Rizzi
      First page: 148
      Abstract: Granular Computing is a powerful information processing paradigm, particularly useful for the synthesis of pattern recognition systems in structured domains (e.g., graphs or sequences). According to this paradigm, granules of information play the pivotal role of describing the underlying (possibly complex) process, starting from the available data. From a pattern recognition viewpoint, granules of information can be exploited for the synthesis of semantically sound embedding spaces, where common supervised or unsupervised problems can be solved via standard machine learning algorithms. In this work, we show a comparison between different strategies for the automatic synthesis of information granules in the context of graph classification. These strategies differ mainly in the specific topology adopted for the subgraphs considered as candidate information granules and in the possibility of using or neglecting the ground-truth class labels in the granulation process. Computational results on 10 different open-access datasets show that, by using class-aware granulation, performance tends to improve (regardless of the information granule topology), counterbalanced by a possibly higher number of information granules.
      Citation: Algorithms
      PubDate: 2022-04-27
      DOI: 10.3390/a15050148
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 149: Extreme Learning Machine Enhanced Gradient
           Boosting for Credit Scoring

    • Authors: Yao Zou, Changchun Gao
      First page: 149
      Abstract: Credit scoring is an effective tool for banks and lending companies to manage the potential credit risk of borrowers. Machine learning algorithms have made great progress in the automatic and accurate discrimination of good and bad borrowers. Notably, ensemble approaches are a group of powerful tools for enhancing the performance of credit scoring. Random Forest (RF) and the Gradient Boosting Decision Tree (GBDT) have become the mainstream ensemble methods for precise credit scoring. RF is a bagging-based ensemble that achieves accurate credit scoring by enriching the diversity of the base learners through modification of the training objects. However, this optimization pattern, which works on invariant training targets, may increase the statistical independence of the base learners. GBDT is a boosting-based ensemble approach that reduces the credit-scoring error by iteratively changing the training target while keeping the training features unchanged; this may harm the diversity of the base learners. In this study, we incorporate the advantages of the bagging ensemble training strategy and the boosting ensemble optimization pattern to enhance the diversity of the base learners. An extreme learning machine-based supervised augmented GBDT is proposed to enhance the discriminative ability for credit scoring. Experimental results on four public credit datasets show a significant improvement in credit scoring and suggest that the proposed method is a good solution for realizing accurate credit scoring.
      Citation: Algorithms
      PubDate: 2022-04-27
      DOI: 10.3390/a15050149
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 150: An Emotion and Attention Recognition
           System to Classify the Level of Engagement to a Video Conversation by
           Participants in Real Time Using Machine Learning Models and Utilizing a
           Neural Accelerator Chip

    • Authors: Janith Kodithuwakku, Dilki Dandeniya Arachchi, Jay Rajasekera
      First page: 150
      Abstract: It is not an easy task for organizers to observe the engagement level of a video meeting audience. This research was conducted to build an intelligent system that enhances the experience of video conversations such as virtual meetings and online classrooms, using convolutional neural network (CNN)- and support vector machine (SVM)-based machine learning models to classify the emotional states and the attention level of the participants in a video conversation. The application visualizes their attention and emotion analytics in a meaningful manner. The proposed system provides an artificial intelligence (AI)-powered analytics system with optimized machine learning models to monitor the audience and prepare insightful reports on the basis of participants’ facial features throughout the video conversation. One of the main objectives of this research is to utilize a neural accelerator chip to enhance the emotion and attention detection tasks. A custom CNN developed by Gyrfalcon Technology Inc. (GTI) named GnetDet was used in this system to run the trained model on their GTI Lightspeeur 2803 neural accelerator chip.
      Citation: Algorithms
      PubDate: 2022-04-27
      DOI: 10.3390/a15050150
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 151: Closed-Form Solution of the Bending
           Two-Phase Integral Model of Euler-Bernoulli Nanobeams

    • Authors: Efthimios Providas
      First page: 151
      Abstract: Recent developments have shown that the widely used simplified differential model of Eringen’s nonlocal elasticity in nanobeam analysis is not equivalent to the corresponding and initially proposed integral models, the pure integral model and the two-phase integral model, in all cases of loading and boundary conditions. This has resolved a paradox with solutions that are not in line with the expected softening effect of the nonlocal theory that appears in all other cases. In addition, it revived interest in the integral model and the two-phase integral model, which were not used due to their complexity in solving the relevant integral and integro-differential equations, respectively. In this article, we use a direct operator method for solving boundary value problems for nth order linear Volterra–Fredholm integro-differential equations of convolution type to construct closed-form solutions to the two-phase integral model of Euler–Bernoulli nanobeams in bending under transverse distributed load and various types of boundary conditions.
      Citation: Algorithms
      PubDate: 2022-04-28
      DOI: 10.3390/a15050151
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 152: Measuring the Non-Transitivity in Chess

    • Authors: Ricky Sanjaya, Jun Wang, Yaodong Yang
      First page: 152
      Abstract: In this paper, we quantify the non-transitivity in chess using human game data. Specifically, we perform non-transitivity quantification in two ways—Nash clustering and counting the number of rock–paper–scissor cycles—on over one billion matches from the Lichess and FICS databases. Our findings indicate that the strategy space of real-world chess strategies has a spinning top geometry and that there exists a strong connection between the degree of non-transitivity and the progression of a chess player’s rating. Particularly, high degrees of non-transitivity tend to prevent human players from making progress in their Elo ratings. We also investigate the implications of non-transitivity for population-based training methods. By considering fixed-memory fictitious play as a proxy, we conclude that maintaining large and diverse populations of strategies is imperative to training effective AI agents for solving chess.
      Citation: Algorithms
      PubDate: 2022-04-28
      DOI: 10.3390/a15050152
      Issue No: Vol. 15, No. 5 (2022)
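Counting rock–paper–scissors cycles, one of the two quantification methods this abstract mentions, amounts to counting directed 3-cycles in a "beats" relation; a minimal sketch (the pairwise `beats` set is a hypothetical input, not the Lichess/FICS data):

```python
from itertools import combinations

def rps_cycles(beats):
    """Count rock-paper-scissors triples in a 'beats' relation.

    `beats` is a set of (winner, loser) pairs; a triple (a, b, c) is
    non-transitive if a beats b, b beats c, and c beats a.
    """
    players = {p for pair in beats for p in pair}
    count = 0
    for a, b, c in combinations(sorted(players), 3):
        # each unordered triple can form a cycle in one of two orientations
        if ((a, b) in beats and (b, c) in beats and (c, a) in beats) or \
           ((b, a) in beats and (c, b) in beats and (a, c) in beats):
            count += 1
    return count
```

A transitive relation (a beats b, b beats c, a beats c) contributes no cycles, so the count directly measures non-transitivity.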
  • Algorithms, Vol. 15, Pages 153: Process Mining in Clinical Practice: Model
           Evaluations in the Central Venous Catheter Installation Training

    • Authors: Gopi Battineni, Nalini Chintalapudi, Gregory Zacharewicz
      First page: 153
      Abstract: Feedback is extremely helpful in medical training, as it may improve student skill development and provide accurate, unbiased assessment. Data are generated by hundreds of complicated and variable processes within healthcare, including treatments, lab results, and internal logistics. Additionally, it is crucial to analyze medical training data to improve operational processes and eliminate bottlenecks. Therefore, the use of process mining (PM) along with conformance checking allows healthcare trainees to gain knowledge about instructor training. Researchers find it challenging to analyze the conformance between observations from event logs and predictions from models with artifacts from the training process. To address this conformance check, we modeled student activities and performance patterns in the training of Central Venous Catheter (CVC) installation. This work aims to provide medical trainees with activities that have easy and interpretable outcomes. The two independent techniques for mining process models were fuzzy mining (for visualizing major activities) and inductive mining (for conformance checking at low threshold noise levels). A set of 20 discrete activity traces was used to validate the conformance checks. The results show a model fitness of 97.8%, with the model moves occurring among the nine activities.
      Citation: Algorithms
      PubDate: 2022-04-29
      DOI: 10.3390/a15050153
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 154: Improving the Quantum Multi-Swarm
           Optimization with Adaptive Differential Evolution for Dynamic Environments

    • Authors: Vladimir Stanovov, Shakhnaz Akhmedova, Aleksei Vakhnin, Evgenii Sopov, Eugene Semenkin, Michael Affenzeller
      First page: 154
      Abstract: In this study, a modification of the quantum multi-swarm optimization algorithm is proposed for dynamic optimization problems. The modification implies using search operators from the differential evolution algorithm with a certain probability within particle swarm optimization to improve the algorithm’s search capabilities in dynamically changing environments. For algorithm testing, the Generalized Moving Peaks Benchmark was used. The experiments were performed for four benchmark settings, and a sensitivity analysis with respect to the main parameters of the algorithms was performed. It is shown that applying the mutation operator from differential evolution to the personal best positions of the particles improves the algorithm's performance.
      Citation: Algorithms
      PubDate: 2022-04-30
      DOI: 10.3390/a15050154
      Issue No: Vol. 15, No. 5 (2022)
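The hybrid step this abstract describes, applying a differential-evolution mutation to the particles' personal bests with a certain probability, can be sketched as follows (a DE/rand/1 mutation is assumed; `f` and `prob` are illustrative values, not the paper's tuned settings):

```python
import random

def de_mutate_pbests(pbests, f=0.5, prob=0.3, rng=None):
    """Apply a DE/rand/1 mutation to each particle's personal best with
    probability `prob`, in the spirit of the hybrid scheme described above.

    pbests: list of position vectors (lists of floats).
    """
    rng = rng or random.Random(0)
    out = []
    for i, p in enumerate(pbests):
        if len(pbests) >= 4 and rng.random() < prob:
            # pick three distinct personal bests other than particle i
            a, b, c = rng.sample([q for j, q in enumerate(pbests) if j != i], 3)
            # DE/rand/1: base vector plus scaled difference of two others
            out.append([ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)])
        else:
            out.append(list(p))
    return out
```

In the full algorithm the mutated position would replace the personal best only if it improves the (possibly time-varying) objective.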
  • Algorithms, Vol. 15, Pages 155: A Review of Modern Audio Deepfake
           Detection Methods: Challenges and Future Directions

    • Authors: Almutairi, Elgibreen
      First page: 155
      Abstract: A number of AI-generated tools are used today to clone human voices, leading to a new technology known as Audio Deepfakes (ADs). Although introduced to enhance human lives, for example as audiobooks, ADs have also been used to disrupt public safety. ADs have thus recently come to the attention of researchers, with Machine Learning (ML) and Deep Learning (DL) methods being developed to detect them. In this article, a review of existing AD detection methods is conducted, along with a comparative description of the available fake-audio datasets. The article introduces the types of AD attacks and then outlines and analyzes the detection methods and datasets for imitation- and synthetic-based Deepfakes. To the best of the authors’ knowledge, this is the first review targeting imitated and synthetically generated audio detection methods. The similarities and differences of AD detection methods are summarized through a quantitative comparison, which finds that the method type affects performance more than the audio features themselves and that a substantial tradeoff between accuracy and scalability exists. Moreover, at the end of this article, the potential research directions and challenges of Deepfake detection methods are discussed, showing that, even though AD detection is an active area of research, further work is still needed to address the existing gaps. This article can be a starting point for researchers to understand the current state of the AD literature and to investigate more robust detection models that can detect fakeness even if the target audio contains accented voices or real-world noises.
      Citation: Algorithms
      PubDate: 2022-05-04
      DOI: 10.3390/a15050155
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 156: Binary Horse Optimization Algorithm for
           Feature Selection

    • Authors: Dorin Moldovan
      First page: 156
      Abstract: The bio-inspired research field has evolved greatly in the last few years due to the large number of novel proposed algorithms and their applications. The sources of inspiration for these novel bio-inspired algorithms are various, ranging from the behavior of groups of animals to the properties of various plants. One problem is the lack of one bio-inspired algorithm which can produce the best global solution for all types of optimization problems. The presented solution considers the proposal of a novel approach for feature selection in classification problems, which is based on a binary version of a novel bio-inspired algorithm. The principal contributions of this article are: (1) the presentation of the main steps of the original Horse Optimization Algorithm (HOA), (2) the adaptation of the HOA to a binary version called the Binary Horse Optimization Algorithm (BHOA), (3) the application of the BHOA in feature selection using nine state-of-the-art datasets from the UCI machine learning repository and the classifiers Random Forest (RF), Support Vector Machines (SVM), Gradient Boosted Trees (GBT), Logistic Regression (LR), K-Nearest Neighbors (K-NN), and Naïve Bayes (NB), and (4) the comparison of the results with the ones obtained using the Binary Grey Wolf Optimizer (BGWO), Binary Particle Swarm Optimization (BPSO), and Binary Crow Search Algorithm (BCSA). The experiments show that the BHOA is effective and robust, as it returned the best mean accuracy value and the best accuracy value for four and seven datasets, respectively, compared to BGWO, BPSO, and BCSA, which returned the best mean accuracy value for four, two, and two datasets, respectively, and the best accuracy value for eight, seven, and five datasets, respectively.
      Citation: Algorithms
      PubDate: 2022-05-06
      DOI: 10.3390/a15050156
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 157: Optimal Open-Loop Control of Discrete
           Deterministic Systems by Application of the Perch School Metaheuristic
           Optimization Algorithm

    • Authors: Andrei V. Panteleev, Anna A. Kolessa
      First page: 157
      Abstract: A new hybrid metaheuristic method for optimizing the objective function on a parallelepiped set of admissible solutions is proposed. It mimics the behavior of a school of river perch when looking for food. The algorithm uses the ideas of several methods: a frog-leaping method, migration algorithms, a cuckoo algorithm, and a path-relinking procedure. As an application, a wide class of problems of finding the optimal control of deterministic discrete dynamical systems with a nonseparable performance criterion is chosen. For this class of optimization problems, it is difficult to apply the discrete maximum principle and its generalizations as a necessary optimality condition and the Bellman equation as a sufficient optimality condition. The desire to extend the class of problems to be solved to control problems of trajectory bundles and stochastic problems leads to the need to use not only classical adaptive random search procedures, but also new approaches combining the ideas of migration algorithms and swarm intelligence methods. The efficiency of this method is demonstrated and analyzed by solving several optimal deterministic discrete control problems: two nonseparable problems (Luus–Tassone and Li–Haimes) and five classic linear-systems control problems with known exact solutions.
      Citation: Algorithms
      PubDate: 2022-05-07
      DOI: 10.3390/a15050157
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 158: PSO Optimized Active Disturbance Rejection
           Control for Aircraft Anti-Skid Braking System

    • Authors: Fengrui Xu, Mengqiao Chen, Xuelin Liang, Wensheng Liu
      First page: 158
      Abstract: A high-quality and secure touchdown run for an aircraft is essential for economic, operational, and strategic reasons. The shortest viable touchdown run without any skidding requires variable braking pressure to manage the friction between the road surface and the braking tire at all times. Therefore, the manipulation and regulation of the anti-skid braking system (ABS) should be able to handle strong nonlinearity and undetectable disturbances and to regulate the wheel slip ratio to make sure that the braking system operates securely. This work proposes an active disturbance rejection control technique for the anti-skid braking system. The control law ensures action that is bounded and manageable, and the manipulating algorithm can ensure that the closed-loop system operates near the peak of the stable region of the friction curve, thereby improving overall braking performance and safety. The stability of the proposed algorithm is proven primarily by means of Lyapunov-based strategies, and its effectiveness is assessed by means of simulations on a semi-physical aircraft brake simulation platform.
      Citation: Algorithms
      PubDate: 2022-05-10
      DOI: 10.3390/a15050158
      Issue No: Vol. 15, No. 5 (2022)
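The wheel slip ratio the controller regulates is conventionally defined as λ = (v − ωR)/v, where v is the vehicle speed, ω the wheel angular speed, and R the wheel radius; a minimal sketch of that computation (the definition is the standard one, not taken from the paper):

```python
def slip_ratio(vehicle_speed, wheel_speed, wheel_radius):
    """Braking slip ratio: 0 means free rolling, 1 means a locked wheel.

    lambda = (v - omega * R) / v, the quantity an anti-skid controller
    regulates toward the peak-friction slip value.
    """
    if vehicle_speed <= 0:
        return 0.0  # aircraft at rest: slip is undefined, report zero
    return (vehicle_speed - wheel_speed * wheel_radius) / vehicle_speed
```

Friction rises with slip up to a peak and then falls off toward a locked wheel, which is why the controller tries to hold λ near the peak of the stable region.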
  • Algorithms, Vol. 15, Pages 159: SentenceLDA- and ConNetClus-Based
           Heterogeneous Academic Network Analysis for Publication Ranking

    • Authors: Jinsong Zhang, Bao Jin, Junyi Sha, Yan Chen, Yijin Zhang
      First page: 159
      Abstract: Scientific papers published in journals or conferences, also considered academic publications, are the manifestation of scientific research achievements. The large number of scientific papers published in digital form brings new challenges for academic evaluation and information retrieval. Therefore, research on ranking methods for scientific papers is significant for the management and evaluation of academic resources. In this paper, we first identify internal and external factors for evaluating scientific papers and propose a publication ranking method based on the analysis of a heterogeneous academic network. We use four types of metadata (i.e., author, venue (journal or conference), topic, and title) as vertices for creating the network, where the topics are trained by the SentenceLDA algorithm on the abstract metadata. We then use the Gibbs sampling method to create a heterogeneous academic network and apply the ConNetClus algorithm to calculate the probability value for publication ranking. To evaluate the significance of the proposed method, we compare the ranking results with baselines such as BM25, PageRank, and homogeneous networks in terms of MAP and NDCG. As shown in our evaluation results, the performance of the proposed method is better than that of the other baselines for ranking publications.
      Citation: Algorithms
      PubDate: 2022-05-10
      DOI: 10.3390/a15050159
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 160: MKD: Mixup-Based Knowledge Distillation
           for Mandarin End-to-End Speech Recognition

    • Authors: Xing Wu, Yifan Jin, Jianjia Wang, Quan Qian, Yike Guo
      First page: 160
      Abstract: Large-scale automatic speech recognition (ASR) models have achieved impressive performance. However, huge computational resources and massive amounts of data are required to train an ASR model. Knowledge distillation is a prevalent model-compression method that transfers knowledge from a large model to a small one. To improve the efficiency of knowledge distillation for end-to-end speech recognition, especially in the low-resource setting, a Mixup-based Knowledge Distillation (MKD) method is proposed that combines Mixup, a data-agnostic data augmentation method, with softmax-level knowledge distillation. A loss-level mixture is presented to address the problem caused by the non-linearity of the labels in the KL-divergence when applying Mixup to the teacher–student framework. It is shown mathematically that optimizing the mixture loss function is equivalent to optimizing an upper bound of the original knowledge distillation loss. The proposed MKD takes advantage of Mixup and brings robustness to the model even with a small amount of training data. The experiments on Aishell-1 show that MKD obtains 15.6% and 3.3% relative improvements on two student models with different parameter scales compared with existing methods. Experiments on data efficiency demonstrate that MKD achieves similar results with only half of the original dataset.
      Citation: Algorithms
      PubDate: 2022-05-11
      DOI: 10.3390/a15050160
      Issue No: Vol. 15, No. 5 (2022)
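The Mixup operation the MKD method builds on takes a convex combination of two training examples and their labels; a minimal sketch (the Beta(α, α) sampling follows the original Mixup recipe, and `alpha` is an illustrative value, not the paper's setting):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Mixup: convex combination of two examples and their one-hot labels.

    Returns the mixed feature vector, the mixed (soft) label, and the
    mixing coefficient lambda drawn from Beta(alpha, alpha).
    """
    rng = rng or random.Random(0)
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

In the distillation setting described above, the mixed label is soft, which is what makes the KL-divergence term non-linear in the labels and motivates the loss-level mixture.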
  • Algorithms, Vol. 15, Pages 161: Optimized Score Level Fusion for
           Multi-Instance Finger Vein Recognition

    • Authors: Jackson Horlick Teng, Thian Song Ong, Tee Connie, Kalaiarasi Sonai Muthu Anbananthen, Pa Pa Min
      First page: 161
      Abstract: The finger vein recognition system uses blood vessels inside the finger of an individual for identity verification. The public is in favor of a finger vein recognition system over conventional passwords or ID cards as the biometric technology is harder to forge, misplace, and share. In this study, the histogram of oriented gradients (HOG) features, which are robust against changes in illumination and position, are extracted from the finger vein for personal recognition. To further increase the amount of information that can be used for recognition, different instances of the finger vein, ranging from the index, middle, and ring finger are combined to form a multi-instance finger vein representation. This fusion approach is preferred since it can be performed without requiring additional sensors or feature extractors. To combine different instances of finger vein effectively, score level fusion is adopted to allow greater compatibility among the wide range of matches. Towards this end, two methods are proposed: Bayesian optimized support vector machine (SVM) score fusion (BSSF) and Bayesian optimized SVM based fusion (BSBF). The fusion results are incrementally improved by optimizing the hyperparameters of the HOG feature, SVM matcher, and the weighted sum of score level fusion using the Bayesian optimization approach. This is considered a kind of knowledge-based approach that takes into account the previous optimization attempts or trials to determine the next optimization trial, making it an efficient optimizer. By using stratified cross-validation in the training process, the proposed method is able to achieve the lowest EER of 0.48% and 0.22% for the SDUMLA-HMT dataset and UTFVP dataset, respectively.
      Citation: Algorithms
      PubDate: 2022-05-11
      DOI: 10.3390/a15050161
      Issue No: Vol. 15, No. 5 (2022)
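Weighted-sum score-level fusion, as used in this abstract, can be sketched as min–max normalisation per matcher followed by a weighted sum (a generic illustration; the actual weights would come from the Bayesian optimisation the abstract describes):

```python
def fuse_scores(score_lists, weights):
    """Weighted-sum score-level fusion across finger instances.

    Scores are min-max normalised per matcher first so the weighted sum
    combines comparable values; `weights` would normally be tuned
    (e.g., by Bayesian optimisation, as in the abstract).
    """
    assert len(score_lists) == len(weights)
    normed = []
    for scores in score_lists:
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0  # guard against a degenerate score range
        normed.append([(s - lo) / span for s in scores])
    n = len(normed[0])
    return [sum(w * m[i] for w, m in zip(weights, normed)) for i in range(n)]
```

Score-level fusion needs no extra sensors or feature extractors, which is the compatibility advantage the abstract points out.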
  • Algorithms, Vol. 15, Pages 162: Experimental Validation of Ellipsoidal
           Techniques for State Estimation in Marine Applications

    • Authors: Andreas Rauh, Yohann Gourret, Katell Lagattu, Bernardo Hummes, Luc Jaulin, Johannes Reuter, Stefan Wirtensohn, Patrick Hoher
      First page: 162
      Abstract: A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
      Citation: Algorithms
      PubDate: 2022-05-11
      DOI: 10.3390/a15050162
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 163: Linking Off-Road Points to Routing

    • Authors: Dominik Köppl
      First page: 163
      Abstract: Although graph theory has already been introduced in spatial reasoning, current spatial database systems do not provide out-of-the-box routing on geometric points that are not matched on the graph. Methods that connect new reference locations to the graph render different routing results. Moreover, current solutions break reasoning down to local analysis. We bridge the gap between routing networks and spatial geometry by a global matching of geometric points to routing networks.
      Citation: Algorithms
      PubDate: 2022-05-12
      DOI: 10.3390/a15050163
      Issue No: Vol. 15, No. 5 (2022)
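Linking an off-road point to a routing network starts from a geometric primitive such as projecting the point onto a candidate edge; a minimal 2-D sketch (the paper's global matching strategy is not reproduced here):

```python
def project_to_segment(p, a, b):
    """Orthogonal projection of point p onto segment a-b (2-D tuples),
    clamped to the segment -- the basic step for linking an off-road
    point to a routing-network edge."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        return a  # degenerate edge: both endpoints coincide
    # parameter t of the projection, clamped to [0, 1] to stay on the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    return (ax + t * dx, ay + t * dy)
```

A matcher would apply this to every nearby edge and pick the closest projection; the choice of candidate edges is exactly where local and global strategies differ.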
  • Algorithms, Vol. 15, Pages 164: A Parallelizable Integer Linear
           Programming Approach for Tiling Finite Regions of the Plane with
           Polyominoes

    • Authors: Garvie, Burkardt
      First page: 164
      Abstract: The general problem of tiling finite regions of the plane with polyominoes is NP-complete, and so the associated computational geometry problem rapidly becomes intractable for large instances. Thus, the need to reduce algorithm complexity for tiling is important and continues as a fruitful area of research. Traditional approaches to tiling with polyominoes use backtracking, which is a refinement of the ‘brute-force’ solution procedure for exhaustively finding all solutions to a combinatorial search problem. In this work, we combine checkerboard colouring techniques with a recently introduced integer linear programming (ILP) technique for tiling with polyominoes. The colouring arguments often split large tiling problems into smaller subproblems, each represented as a separate ILP problem. Problems that are amenable to this approach are embarrassingly parallel, and our work provides proof of concept of a parallelizable algorithm. The main goal is to analyze when this approach yields a potential parallel speedup. The novel colouring technique shows excellent promise in yielding a parallel speedup for finding large tiling solutions with ILP, particularly when we seek a single (optimal) solution. We also classify the tiling problems that result from applying our colouring technique according to different criteria and compute representative  examples using a combination of MATLAB and CPLEX, a commercial optimization package that can solve ILP problems. The collections of MATLAB programs PARIOMINOES (v3.0.0) and POLYOMINOES (v2.1.4) used to construct the ILP problems are freely available for download.
      Citation: Algorithms
      PubDate: 2022-05-12
      DOI: 10.3390/a15050164
      Issue No: Vol. 15, No. 5 (2022)
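The ILP technique the abstract builds on can be stated as a standard exact-cover feasibility model (a generic formulation; the paper's actual model and checkerboard-colouring constraints may differ). Let $P$ be the set of all valid polyomino placements in the region $R$, and $P(c) \subseteq P$ the placements that cover cell $c$:

```latex
\text{find } x \in \{0,1\}^{|P|}
\quad \text{subject to} \quad
\sum_{p \in P(c)} x_p = 1 \quad \text{for every cell } c \in R .
```

Each binary variable $x_p$ selects one placement, and the covering constraints force every cell to be tiled exactly once; colouring arguments prune placements (or split $R$) before the solver runs, which is what makes the subproblems independent and parallelizable.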
  • Algorithms, Vol. 15, Pages 165: Smart(Sampling)Augment: Optimal and
           Efficient Data Augmentation for Semantic Segmentation

    • Authors: Misgana Negassi, Diane Wagner, Alexander Reiterer
      First page: 165
      Abstract: Data augmentation methods enrich datasets with augmented data to improve the performance of neural networks. Recently, automated data augmentation methods have emerged, which automatically design augmentation strategies. The existing work focuses on image classification and object detection, whereas we provide the first study on semantic image segmentation and introduce two new approaches: SmartAugment and SmartSamplingAugment. SmartAugment uses Bayesian Optimization to search a rich space of augmentation strategies and achieves new state-of-the-art performance in all semantic segmentation tasks we consider. SmartSamplingAugment, a simple parameter-free approach with a fixed augmentation strategy, competes in performance with the existing resource-intensive approaches and outperforms cheap state-of-the-art data augmentation methods. Furthermore, we analyze the impact, interaction, and importance of data augmentation hyperparameters and perform ablation studies, which confirm our design choices behind SmartAugment and SmartSamplingAugment. Lastly, we will provide our source code for reproducibility and to facilitate further research.
      Citation: Algorithms
      PubDate: 2022-05-16
      DOI: 10.3390/a15050165
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 166: Efficient Machine Learning Models for
           Early Stage Detection of Autism Spectrum Disorder

    • Authors: Mousumi Bala, Mohammad Hanif Ali, Md. Shahriare Satu, Khondokar Fida Hasan, Mohammad Ali Moni
      First page: 166
      Abstract: Autism spectrum disorder (ASD) is a neurodevelopmental disorder that severely impairs an individual’s cognitive, linguistic, object recognition, communication, and social abilities. The condition is not treatable, although early detection of ASD can assist in diagnosis and in taking proper steps to mitigate its effect. Using various artificial intelligence (AI) techniques, ASD can be detected at an earlier stage than with traditional methods. The aim of this study was to propose a machine learning model that investigates ASD data of different age levels and identifies ASD more accurately. In this work, we gathered ASD datasets of toddlers, children, adolescents, and adults and used several feature selection techniques. Then, different classifiers were applied to these datasets, and we assessed their performance with evaluation metrics including predictive accuracy, kappa statistics, the F1-measure, and AUROC. In addition, we analyzed the performance of individual classifiers using a non-parametric statistical significance test. For the toddler, child, adolescent, and adult datasets, we found that the Support Vector Machine (SVM) performed better than the other classifiers: we gained 97.82% accuracy for the RIPPER-based toddler subset; 99.61% accuracy for the correlation-based feature selection (CFS) and Boruta CFS intersect (BIC) method-based child subset; 95.87% accuracy for the Boruta-based adolescent subset; and 96.82% accuracy for the CFS-based adult subset. Finally, we applied the Shapley Additive Explanations (SHAP) method to the feature subsets that gained the highest accuracy and ranked their features based on this analysis.
      Citation: Algorithms
      PubDate: 2022-05-16
      DOI: 10.3390/a15050166
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 167: Construction of Life-Cycle Simulation
           Framework of Chronic Diseases and Their Comorbidities Based on Population

    • Authors: Peixia Sun, Shengxiong Lao, Dongyang Du, Jiqiang Peng, Xu Yang
      First page: 167
      Abstract: Life-cycle population follow-up data collection is time-consuming and often takes decades. General cohort studies instead collect short-to-medium-term data from populations of different age groups. The purpose of constructing a life-cycle simulation method is to find an efficient and reliable way to characterize life-cycle disease transition from these short-to-medium-term data. In this paper, we present the construction of a full-lifetime population cohort simulation framework. The design aim is to generate a comprehensive understanding of disease transition over the full lifetime when only short- or medium-term population cohort data are available. We conducted several groups of experiments to show the effectiveness of our method.
      Citation: Algorithms
      PubDate: 2022-05-16
      DOI: 10.3390/a15050167
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 168: Approaches to Parameter Estimation from
           Model Neurons and Biological Neurons

    • Authors: Alain Nogaret
      First page: 168
      Abstract: Model optimization in neuroscience has focused on inferring intracellular parameters from time series observations of the membrane voltage and calcium concentrations. These parameters constitute the fingerprints of ion channel subtypes and may identify ion channel mutations from observed changes in electrical activity. A central question in neuroscience is whether computational methods may obtain ion channel parameters with sufficient consistency and accuracy to provide new information on the underlying biology. Finding single-valued solutions, in particular, remains an outstanding theoretical challenge. This note reviews recent progress in the field. It first covers well-posed problems and describes the conditions that the model and data need to meet to warrant the recovery of all the original parameters—even in the presence of noise. The main challenge is model error, which reflects our lack of knowledge of the exact equations. We report on strategies that have been partially successful at inferring the parameters of rodent and songbird neurons, when model error is sufficiently small for accurate predictions to be made irrespective of stimulation.
      Citation: Algorithms
      PubDate: 2022-05-20
      DOI: 10.3390/a15050168
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 169: Stimulation Montage Achieves Balanced
           Focality and Intensity

    • Authors: Yushan Wang, Jonathan Brand, Wentai Liu
      First page: 169
      Abstract: Transcranial direct current stimulation (tDCS) is a non-invasive neuromodulation technique that treats brain disorders by using a constant, low current to stimulate targeted cortical regions. Compared to conventional tDCS, which uses two large pad electrodes, multi-electrode tDCS has recently received more attention, as it is able to achieve better stimulation performance in terms of stimulation intensity and focality. In this paper, we first establish a computational model of tDCS and then propose a novel optimization algorithm using a regularization matrix λ to explore the balance between stimulation intensity and focality. The simulation study is designed such that the performance of state-of-the-art algorithms and the proposed algorithm can be compared via quantitative evaluation. The results show that the proposed algorithm not only achieves the desired intensity but also a smaller target error and better focality. Robustness analysis indicates that the results are stable within the ranges of scalp and cerebrospinal fluid (CSF) conductivities, while the skull conductivity is the most sensitive and should be carefully considered in real clinical applications.
      Citation: Algorithms
      PubDate: 2022-05-20
      DOI: 10.3390/a15050169
      Issue No: Vol. 15, No. 5 (2022)
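The intensity-versus-focality trade-off described in the abstract above can be reduced, in the simplest case of a scalar regularization weight and two electrode currents, to a closed-form ridge solution x = (AᵀA + λI)⁻¹Aᵀt. This is a toy illustration under that simplifying assumption, not the paper's algorithm, which uses a full regularization matrix λ; the lead-field matrix A and target field t below are hypothetical.

```python
def ridge_currents(A, t, lam):
    """Closed-form ridge solution for two electrode currents.

    A   : list of rows [a1, a2], a (hypothetical) lead-field matrix mapping
          electrode currents to the electric field at sampled brain locations
    t   : desired field values at those locations
    lam : scalar regularization weight; larger lam shrinks the currents,
          trading raw intensity against spread (focality)
    """
    # Gram matrix A^T A + lam * I (2x2), and right-hand side A^T t.
    g11 = sum(r[0] * r[0] for r in A) + lam
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A) + lam
    b1 = sum(r[0] * ti for r, ti in zip(A, t))
    b2 = sum(r[1] * ti for r, ti in zip(A, t))
    # Solve the 2x2 system by Cramer's rule.
    det = g11 * g22 - g12 * g12
    return ((b1 * g22 - g12 * b2) / det, (g11 * b2 - b1 * g12) / det)
```

With lam = 0 this recovers plain least squares; increasing lam shrinks the current pattern, which is the balance the paper tunes.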
  • Algorithms, Vol. 15, Pages 170: Agglomerative Clustering with Threshold
           Optimization via Extreme Value Theory

    • Authors: Chunchun Li, Manuel Günther, Akshay Raj Dhamija, Steve Cruz, Mohsen Jafarzadeh, Touqeer Ahmad, Terrance E. Boult
      First page: 170
      Abstract: Clustering is a critical part of many tasks and, in most applications, the number of clusters in the data is unknown and must be estimated. This paper presents an Extreme Value Theory-based approach to threshold selection for clustering, proving that the “correct” linkage distances must follow a Weibull distribution for smooth feature spaces. Deep networks and their associated deep features have transformed many aspects of learning, and this paper shows they are consistent with our extreme-linkage theory and provide Unreasonable Clusterability. We show how our novel threshold selection can be applied to both classic agglomerative clustering and the more recent FINCH (First Integer Neighbor Clustering Hierarchy) algorithm. Our evaluation utilizes over a dozen different large-scale vision datasets/subsets, including multiple face-clustering datasets and ImageNet for both in-domain and, more importantly, out-of-domain object clustering. Across multiple deep-feature clustering tasks with very different characteristics, our novel automated threshold selection performs well, often outperforming state-of-the-art clustering techniques even when they select parameters on the test set.
      Citation: Algorithms
      PubDate: 2022-05-20
      DOI: 10.3390/a15050170
      Issue No: Vol. 15, No. 5 (2022)
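The quantity the paper selects automatically (via its Weibull/EVT analysis) is the linkage-distance threshold at which agglomerative merging stops. A plain single-linkage sketch makes the role of that threshold concrete; the EVT-based selection itself is not reproduced here, and all names are illustrative.

```python
import math

def single_linkage_clusters(points, threshold):
    """Greedy single-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until the smallest inter-cluster (nearest-pair)
    distance exceeds `threshold`."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: distance between closest members of two clusters.
        return min(math.dist(p, q) for p in a for q in b)

    while len(clusters) > 1:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        if dist(clusters[i], clusters[j]) > threshold:
            break          # the stopping rule the paper's EVT method tunes
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

A too-small threshold over-fragments and a too-large one merges everything, which is why automating its choice matters.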
  • Algorithms, Vol. 15, Pages 171: Research on an Optimal Path Planning
           Method Based on A* Algorithm for Multi-View Recognition

    • Authors: Xinning Li, Qun He, Qin Yang, Neng Wang, Hu Wu, Xianhai Yang
      First page: 171
      Abstract: To obtain the optimal perspectives of the recognition target, this paper combines the motion path of the manipulator arm and the camera. A path planning method for finding the optimal perspectives based on an A* algorithm is proposed. The quality of perspectives is represented by means of multi-view recognition. A binary multi-view 2D kernel principal component analysis network (BM2DKPCANet) is built to extract features. A multi-view angle classifier based on BM2DKPCANet + Softmax is established, which outputs the category posterior probability to represent the perspective recognition performance function. The path planning problem is transformed into a multi-objective optimization problem by taking the optimal view recognition and the shortest path distance as the objective functions. To reduce computation, the multi-objective optimization problem is transformed into a single-objective optimization problem by fusing the objective functions based on the established perspective observation directed graph model. An A* algorithm is used to solve the single-source shortest path problem of the fused directed graph. Path planning experiments with different numbers of view angles and different starting points demonstrate that the method can guide the camera to the viewpoints with higher recognition accuracy and complete the optimal observation path planning.
      Citation: Algorithms
      PubDate: 2022-05-20
      DOI: 10.3390/a15050171
      Issue No: Vol. 15, No. 5 (2022)
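The A* search at the core of the abstract above is standard; a minimal grid version (with a Manhattan heuristic, not the paper's fused perspective-observation graph) looks like this:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; grid[r][c] == 1 marks an
    obstacle. Returns the path as a list of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                            # stale queue entry
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None
```

In the paper the edge costs additionally fold in the recognition-quality objective, so the same search returns a viewpoint path rather than a purely geometric one.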
  • Algorithms, Vol. 15, Pages 172: A Tailored Pricing Strategy for Different
           Types of Users in Hybrid Carsharing Systems

    • Authors: Rongqin Lu, Xiaomei Zhao, Yingqi Wang
      First page: 172
      Abstract: Considering the characteristics of different types of users in hybrid carsharing systems, in which shared autonomous vehicles (SAVs) and conventional shared cars (CSCs) coexist, a tailored pricing strategy (TPS) is proposed to maximize the operator’s profit and minimize all users’ costs. The fleet sizes and the sizes of the SAVs’ stations are also determined simultaneously. A bi-objective nonlinear programming model is established, and a genetic algorithm is applied to solve it. Based on operational data from Lanzhou, China, carsharing users are clustered into three types: loyal users, losing users, and potential users. Results show that applying the TPS can help the operator increase profit and attract more users. The loyal users are assigned the highest price, yet they still contribute the most to the operator’s profit with the highest number of carsharing trips. The losing users and potential users are comparable in terms of the number of trips, while the latter generate more profit.
      Citation: Algorithms
      PubDate: 2022-05-20
      DOI: 10.3390/a15050172
      Issue No: Vol. 15, No. 5 (2022)
  • Algorithms, Vol. 15, Pages 173: Integrating Process Mining with
           Discrete-Event Simulation for Dynamic Productivity Estimation in Heavy
           Civil Construction Operations

    • Authors: Khandakar M. Rashid, Joseph Louis
      First page: 173
      Abstract: Construction companies are increasingly utilizing sensing technologies to automatically record the different steps of the construction process in detail for effective monitoring and control. This generates a significant amount of event data that can be used to learn the underlying behavior of agents on a construction site using process mining. While process mining can be used to discover the real process and to identify and analyze deviations and bottlenecks in operations, it is a backward-looking approach. On the other hand, discrete-event simulation (DES) provides a means to forecast future performance from historical data, enabling proactive decision-making by operation managers. However, this method is largely unused by the industry due to the specialized knowledge required to create DES models. This paper thus proposes a framework that extends the utility of collected event data and their process models by transforming them into DES models for forecasting future performance. The framework also addresses another challenge of using DES: its inability to update itself as the project progresses. This challenge is addressed by using the Bayesian updating technique to continuously update the input parameters of the simulation model for the most up-to-date estimation based on data collected from the field. The proposed framework was validated on a real-world case study of an earthmoving operation. The results show that the process mining techniques could accurately discover the process model from the event data collected in the field. Furthermore, it was noted that continuous updating of DES model input parameters can provide accurate and reliable productivity estimates based on the actual data generated from the field. The proposed framework can help stakeholders discover the underlying sequence of their operations and enables timely, data-driven decisions regarding operations control.
      Citation: Algorithms
      PubDate: 2022-05-21
      DOI: 10.3390/a15050173
      Issue No: Vol. 15, No. 5 (2022)
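The continuous Bayesian updating of DES input parameters described above can be sketched with a conjugate normal-normal update of, say, a mean activity duration. This is an assumption for illustration (the paper does not specify its likelihoods here); the function and parameter names are hypothetical.

```python
def update_duration_belief(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update: fold a batch of observed activity
    durations `obs` (each with observation variance `obs_var`) into the
    current belief (prior_mean, prior_var) about the mean duration."""
    post_mean, post_var = prior_mean, prior_var
    for x in obs:
        k = post_var / (post_var + obs_var)          # Kalman-style gain
        post_mean = post_mean + k * (x - post_mean)  # shift toward the data
        post_var = (1 - k) * post_var                # belief tightens
    return post_mean, post_var
```

Each new batch of field observations moves the simulation's duration parameter toward the observed values while shrinking its uncertainty, which is how the DES model stays current as the project progresses.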
  • Algorithms, Vol. 15, Pages 174: Detecting and Responding to Concept Drift
           in Business Processes

    • Authors: Lingkai Yang, Sally McClean, Mark Donnelly, Kevin Burke, Kashaf Khan
      First page: 174
      Abstract: Concept drift, which refers to changes in the underlying process structure or customer behaviour over time, is inevitable in business processes, causing challenges in ensuring that the learned model remains a proper representation of the new data. Due to factors such as seasonal effects and policy updates, concept drifts can occur in customer transitions and in the time spent throughout the process, either suddenly or gradually. In a concept drift context, we can discard the old data and retrain the model using new observations (sudden drift), combine the old data with the new data to update the model (gradual drift), or leave the model unchanged (no drift). In this paper, we model the response to concept drift as a sequential decision-making problem by combining a hierarchical Markov model and a Markov decision process (MDP). The approach can detect concept drift, retrain the model and update customer profiles automatically. We validate the proposed approach on 68 artificial datasets and a real-world hospital billing dataset, with experimental results showing promising performance.
      Citation: Algorithms
      PubDate: 2022-05-21
      DOI: 10.3390/a15050174
      Issue No: Vol. 15, No. 5 (2022)
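The three responses listed in the abstract (retrain on sudden drift, update on gradual drift, otherwise keep the model) can be caricatured with a simple windowed threshold rule. This is only an illustration: the paper's actual policy comes from a hierarchical Markov model combined with an MDP, and the thresholds below are hypothetical.

```python
from statistics import mean

def drift_response(reference, recent, sudden=0.5, gradual=0.2):
    """Toy drift policy: compare the recent window's mean statistic (e.g.
    time spent in an activity) against the reference window and pick one of
    the three responses described in the abstract."""
    shift = abs(mean(recent) - mean(reference))
    if shift > sudden:
        return "retrain"   # sudden drift: discard old data, refit on new
    if shift > gradual:
        return "update"    # gradual drift: combine old and new data
    return "keep"          # no drift: leave the model unchanged
```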
  • Algorithms, Vol. 15, Pages 102: Dynamic Line Scan Thermography Parameter
           Design via Gaussian Process Emulation

    • Authors: Simon Verspeek, Ivan De Boi, Xavier Maldague, Rudi Penne, Gunther Steenackers
      First page: 102
      Abstract: We address the challenge of determining a valid set of parameters for a dynamic line scan thermography setup. Traditionally, this optimization process is labor- and time-intensive work, even for an expert skilled in the art. Nowadays, simulations in software can reduce some of that burden. However, when faced with many parameters to optimize, all of which cover a large range of values, this is still a time-consuming endeavor. A large number of simulations are needed to adequately capture the underlying physical reality. We propose to emulate the simulator by means of a Gaussian process. This statistical model serves as a surrogate for the simulations. To some extent, this can be thought of as a “model of the model”. Once trained on a relatively small number of data points, this surrogate model can be queried to answer various engineering design questions. Moreover, the underlying model, a Gaussian process, is stochastic in nature. This allows for uncertainty quantification in the outcomes of the queried model, which plays an important role in decision making and risk assessment. We provide several real-world examples that demonstrate the usefulness of this method.
      Citation: Algorithms
      PubDate: 2022-03-22
      DOI: 10.3390/a15040102
      Issue No: Vol. 15, No. 4 (2022)
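The surrogate idea in the abstract above, a Gaussian process trained on a few simulator runs and then queried for both a prediction and its uncertainty, can be sketched in one dimension with an RBF kernel. This is a generic textbook GP regression sketch, not the authors' implementation; the kernel, length scale and data are illustrative.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def cholesky(A):
    """Cholesky factor L of a symmetric positive-definite matrix (A = L L^T)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def solve_chol(L, b):
    """Solve (L L^T) x = b by forward then backward substitution."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def gp_predict(xs, ys, xq, length=1.0, jitter=1e-9):
    """Posterior mean and variance of the GP surrogate at query point xq,
    given simulator evaluations (xs, ys)."""
    K = [[rbf(a, b, length) + (jitter if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    L = cholesky(K)
    alpha = solve_chol(L, ys)                    # K^-1 y
    k_star = [rbf(a, xq, length) for a in xs]
    mean = sum(ki * ai for ki, ai in zip(k_star, alpha))
    v = solve_chol(L, k_star)                    # K^-1 k*
    var = rbf(xq, xq, length) - sum(ki * vi for ki, vi in zip(k_star, v))
    return mean, max(var, 0.0)
```

Near training points the variance collapses; far from them it returns to the prior, which is the uncertainty signal the abstract says feeds decision making and risk assessment.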
  • Algorithms, Vol. 15, Pages 103: A Review on the Performance of Linear and
           Mixed Integer Two-Stage Stochastic Programming Software

    • Authors: Juan J. Torres, Can Li, Robert M. Apap, Ignacio E. Grossmann
      First page: 103
      Abstract: This paper presents a tutorial on the state-of-the-art software for the solution of two-stage (mixed-integer) linear stochastic programs and provides a list of software designed for this purpose. The methodologies are classified according to the decomposition alternatives and the types of the variables in the problem. We review the fundamentals of Benders decomposition, dual decomposition and progressive hedging, as well as possible improvements and variants. We also present extensive numerical results to underline the properties and performance of each algorithm using software implementations, including DECIS, FORTSP, PySP, and DSP. Finally, we discuss the strengths and weaknesses of each methodology and propose future research directions.
      Citation: Algorithms
      PubDate: 2022-03-22
      DOI: 10.3390/a15040103
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 104: Federated Learning for Intrusion Detection
           in the Critical Infrastructures: Vertically Partitioned Data Use Case

    • Authors: Evgenia Novikova, Elena Doynikova, Sergey Golubev
      First page: 104
      Abstract: One of the challenges in Internet of Things systems is the security of critical data, for example, data used for intrusion detection. This paper investigates the construction of an intrusion detection system that ensures the confidentiality of critical data at a given level of intrusion detection accuracy. To this end, federated learning is used to train an intrusion detection model. Federated learning is a computational model for distributed machine learning that allows different collaborating entities to train one global model without sharing data. This paper considers the case when entities have data that differ in their attributes.
      The authors believe this is a common situation for critical systems constructed using Internet of Things (IoT) technology, where industrial objects are monitored by different sets of sensors. To evaluate the applicability of federated learning to this case, the authors developed an approach and an architecture of an intrusion detection system for vertically partitioned data that follow the principles of federated learning, and conducted a series of experiments. To model vertically partitioned data, the authors used the Secure Water Treatment (SWaT) dataset, which describes the functioning of a water treatment facility. The experiments demonstrate that the accuracy of the intrusion detection model trained using federated learning is comparable to that of a model trained using centralized machine learning. However, the computational efficiency of the learning and inference process is currently extremely low. This is explained by the application of homomorphic encryption to protect the input data from the different data owners or data sources. This motivates the development of techniques for generating attributes that could model horizontally partitioned data even when the collaborating entities share datasets that differ in their attributes.
      Citation: Algorithms
      PubDate: 2022-03-23
      DOI: 10.3390/a15040104
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 105: Analyzing Markov Boundary Discovery
           Algorithms in Ideal Conditions Using the d-Separation Criterion

    • Authors: Camil Băncioiu, Remus Brad
      First page: 105
      Abstract: This article proposes the usage of the d-separation criterion in Markov Boundary Discovery algorithms, instead of or alongside the statistical tests of conditional independence these algorithms usually rely on. This is a methodological improvement applicable when designing, studying or improving such algorithms, but it is not applicable for productive use, because computing the d-separation criterion requires complete knowledge of a Bayesian network. Yet Bayesian networks can be made available to the algorithms when studied in controlled conditions. This approach has the effect of removing sources of suboptimal behavior, allowing the algorithms to perform at their theoretical best and providing insights about their properties. The article also discusses an extension of this approach, namely to use d-separation as a complement to the usual statistical tests performed on synthetic datasets in order to ascertain the overall accuracy of the tests chosen by the algorithms, for further insights into their behavior. To exemplify these two approaches, two Markov Boundary Discovery algorithms were used, namely the Incremental Association Markov Blanket algorithm and the Iterative Parent–Child-Based Search of Markov Blanket algorithm. Firstly, these algorithms were configured to use d-separation alone as their conditional independence test, computed on known Bayesian networks. Subsequently, the algorithms were configured to use the statistical G-test complemented by d-separation to evaluate their behavior on synthetic data.
      Citation: Algorithms
      PubDate: 2022-03-23
      DOI: 10.3390/a15040105
      Issue No: Vol. 15, No. 4 (2022)
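Computing the d-separation criterion from a known Bayesian network, as the article above proposes for studying Markov Boundary Discovery algorithms in ideal conditions, can be done via the standard ancestral moral graph construction (a generic sketch, not the authors' code): restrict to the ancestral set of the query nodes, moralize, delete the conditioning set, and test reachability.

```python
def d_separated(parents, xs, ys, zs):
    """Check whether node sets xs and ys are d-separated given zs in a DAG.
    `parents` maps each node to its set of parents (absent keys = roots)."""
    # 1. Restrict to the ancestral set of xs, ys and zs.
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents.get(n, ()))
    # 2. Moralize: undirected parent-child edges plus edges between co-parents.
    adj = {n: set() for n in anc}
    for n in anc:
        ps = list(parents.get(n, ()))
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. Remove the conditioning set and test reachability from xs to ys.
    stack, seen = [x for x in xs if x not in zs], set()
    while stack:
        n = stack.pop()
        if n in ys:
            return False           # a path survives: d-connected
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - zs)
    return True
```

The collider A -> C <- B behaves as expected: A and B are independent marginally but become d-connected once C is conditioned on.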
  • Algorithms, Vol. 15, Pages 106: Forecast of Medical Costs in Health
           Companies Using Models Based on Advanced Analytics

    • Authors: Daniel Ricardo Sandoval Serrano, Juan Carlos Rincón, Julián Mejía-Restrepo, Edward Rolando Núñez-Valdez, Vicente García-Díaz
      First page: 106
      Abstract: Forecasting medical costs is crucial for planning, budgeting, and efficient decision making in the health industry. This paper introduces a proposal to forecast costs through techniques such as a standard long short-term memory (LSTM) model and patient grouping through k-means clustering in the Keralty group, one of Colombia’s leading healthcare companies. It is important to highlight the implications for the prediction of cost time series in the health sector based on a retrospective analysis of the information on services invoiced to health companies. The approach starts with the selection of sociodemographic variables related to the patient, such as age, gender and marital status, and is complemented with health variables such as patient comorbidities (cohorts) and induced variables, such as service provision frequency and the time elapsed since the last consultation (hereafter referred to as “recency”). Our results suggest that greater accuracy can be achieved by first clustering and then using LSTM networks. This implies that a correct segmentation of the population according to the usage of services, as represented in costs, must be performed beforehand. Through the analysis, a cost projection from 1 to 3 months can be conducted, allowing a comparison with historical data. The reliability of the model is validated by different metrics such as RMSE and adjusted R². Overall, this study is intended to be useful for healthcare managers in developing a strategy for medical cost forecasting. We conclude that the use of analytical tools allows the organization to make informed decisions and to develop strategies for optimizing resources within the identified population.
      Citation: Algorithms
      PubDate: 2022-03-23
      DOI: 10.3390/a15040106
      Issue No: Vol. 15, No. 4 (2022)
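The patient-grouping step above (k-means before the per-cluster LSTM) can be sketched in one dimension, e.g. clustering patients by a single cost figure. The data and names are toy illustrations, not the Keralty data.

```python
def kmeans_1d(values, k, iters=20):
    """Plain 1-D k-means (Lloyd's algorithm): alternate assigning each value
    to its nearest centroid and moving each centroid to its group's mean."""
    # Spread the initial centroids across the sorted values.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids, groups
```

Each resulting group would then get its own forecasting model, which is the segmentation the abstract says must happen before the LSTM step.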
  • Algorithms, Vol. 15, Pages 107: KMC3 and CHTKC: Best Scenarios,
           Deficiencies, and Challenges in High-Throughput Sequencing Data Analysis

    • Authors: Deyou Tang, Daqiang Tan, Weihao Xiao, Jiabin Lin, Juan Fu
      First page: 107
      Abstract: Background: K-mer frequency counting is an upstream process of many bioinformatics data analysis workflows. KMC3 and CHTKC are representative partition-based and non-partition-based k-mer counting algorithms, respectively. This paper evaluates the two algorithms and presents their best applicable scenarios and potential improvements using multiple hardware contexts and datasets. Results: KMC3 uses less memory and runs faster than CHTKC on a regular configuration server. CHTKC is efficient on high-performance computing platforms with high available memory, many threads, and low IO bandwidth. When tested with various datasets, KMC3 is less sensitive to the number of distinct k-mers and is more efficient for tasks with relatively low sequencing quality and long k-mers. CHTKC performs better than KMC3 in counting assignments with large-scale datasets, high sequencing quality, and short k-mers. Both algorithms are affected by IO bandwidth, and decreasing the influence of the IO bottleneck is critical, as our tests show improvement by filtering and compressing consecutive first-occurring k-mers in KMC3. Conclusions: KMC3 is more competitive for counting k-mers on ordinary hardware resources, and CHTKC is more competitive for counting k-mers in super-scale datasets on higher-performance computing platforms. Reducing the influence of the IO bottleneck is essential for optimizing k-mer counting algorithms, and filtering and compressing low-frequency k-mers is critical to relieving the IO impact.
      Citation: Algorithms
      PubDate: 2022-03-24
      DOI: 10.3390/a15040107
      Issue No: Vol. 15, No. 4 (2022)
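For readers outside bioinformatics, the task KMC3 and CHTKC solve at scale (with disk partitioning and concurrent hash tables, respectively) is simply k-mer frequency counting, which in its naive in-memory form is a few lines:

```python
from collections import Counter

def count_kmers(reads, k):
    """Naive in-memory k-mer frequency counter: slide a window of length k
    over each read and tally every substring."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts
```

The engineering difficulty the paper evaluates is entirely about doing this when the reads no longer fit in memory and IO becomes the bottleneck.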
  • Algorithms, Vol. 15, Pages 108: Trinity: Neural Network Adaptive
           Distributed Parallel Training Method Based on Reinforcement Learning

    • Authors: Yan Zeng, Jiyang Wu, Jilin Zhang, Yongjian Ren, Yunquan Zhang
      First page: 108
      Abstract: Deep learning, with increasingly large datasets and complex neural networks, is widely used in computer vision and natural language processing. A resulting trend is to split large-scale neural network models and train them across multiple devices in parallel, known as parallel model training. Existing parallel methods are mainly based on expert design, which is inefficient and requires specialized knowledge. Although automatically implemented parallel methods have been proposed to solve these problems, they consider only a single optimization objective, run time. In this paper, we present Trinity, an adaptive distributed parallel training method based on reinforcement learning, to automate the search and tuning of parallel strategies. We build a multidimensional performance evaluation model and use proximal policy optimization to co-optimize multiple optimization aspects. Our experiments used the CIFAR10 and PTB datasets with the InceptionV3, NMT, NASNet and PNASNet models. Compared with Google’s Hierarchical method, Trinity achieves up to 5% reductions in runtime, communication, and memory overhead, and up to a 40% increase in parallel strategy search speed.
      Citation: Algorithms
      PubDate: 2022-03-24
      DOI: 10.3390/a15040108
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 109: Skeptical Learning—An Algorithm and
           a Platform for Dealing with Mislabeling in Personal Context Recognition

    • Authors: Wanyi Zhang, Mattia Zeni, Andrea Passerini, Fausto Giunchiglia
      First page: 109
      Abstract: Mobile Crowd Sensing (MCS) is a novel IoT paradigm where sensor data, as collected by the user’s mobile devices, are integrated with user-generated content, e.g., annotations, self-reports, or images. While providing many advantages, the human involvement also brings big challenges, the most critical possibly being the poor quality of human-provided content, most often due to inaccurate input from non-expert users. In this paper, we propose Skeptical Learning, an interactive machine learning algorithm where the machine checks the quality of the user feedback and tries to fix it when a problem arises. In this context, the user feedback consists of answers to machine-generated questions, at times defined by the machine. The main idea is to integrate three core elements, which are (i) sensor data, (ii) user answers, and (iii) existing prior knowledge of the world, and to enable a second round of validation with the user any time these three types of information jointly generate an inconsistency. The proposed solution is evaluated in a project focusing on a university student life scenario. The main goal of the project is to recognize the locations and transportation modes of the students. The results highlight an unexpectedly high pervasiveness of user mistakes in the university student life project. The results also show the advantages provided by Skeptical Learning in dealing with mislabeling issues in an interactive way and in improving the prediction performance.
      Citation: Algorithms
      PubDate: 2022-03-24
      DOI: 10.3390/a15040109
      Issue No: Vol. 15, No. 4 (2022)
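The skeptical loop described above, accept a user label unless it is jointly inconsistent with the sensor data and prior world knowledge, can be caricatured in a few lines. Everything here is hypothetical (label names, the confidence threshold, the feasibility set); the paper's actual consistency check is richer.

```python
def skeptical_decision(sensor_label, sensor_conf, user_label, feasible):
    """Toy version of the skeptical check: accept the user's annotation
    unless it is infeasible under prior knowledge (`feasible` is the set of
    labels possible in the current context) or strongly contradicts the
    sensor-based prediction, in which case revalidation is requested."""
    if user_label not in feasible:
        return "revalidate"        # contradicts prior world knowledge
    if user_label != sensor_label and sensor_conf > 0.9:
        return "revalidate"        # strongly contradicts the sensor data
    return "accept"
```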
  • Algorithms, Vol. 15, Pages 110: Numerical Simulation of Micro-Bubbles
           Dispersion by Surface Waves

    • Authors: Oleg A. Druzhinin, Wu-Ting Tsai
      First page: 110
      Abstract: This paper presents an algorithm for the numerical modeling of bubble dispersion occurring in the near-surface layer of the upper ocean under the action of non-breaking two-dimensional (2D) surface waves. The algorithm is based on an Eulerian-Lagrangian approach where the full 3D Navier-Stokes equations for the carrier flow induced by a waved water surface are solved in an Eulerian frame, and the trajectories of individual bubbles are simultaneously tracked in a Lagrangian frame, taking into account the impact of the bubbles on the carrier flow. The bubble diameters are considered in the range from 200 to 400 microns (thus, micro-bubbles), and effects related to bubble deformation and dissolution in water are neglected. The algorithm allows evaluation of the instantaneous as well as statistically stationary, phase-averaged profiles of the carrier-flow turbulence, bubble concentration (void fraction) and void-fraction fluxes for different flow regimes, both with and without wind-induced surface drift. The simulation results show that bubbles are capable of enhancing the carrier-flow turbulence, as compared to the bubble-free flow, and that the vertical water velocity fluctuations are mostly augmented, and increasingly so by larger bubbles. The results also show that the bubble dynamics are governed by buoyancy, the surrounding fluid acceleration force and the drag force, whereas the impact of the lift force remains negligible.
      Citation: Algorithms
      PubDate: 2022-03-24
      DOI: 10.3390/a15040110
      Issue No: Vol. 15, No. 4 (2022)
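The Lagrangian side of the method above, integrating each bubble's equation of motion, can be sketched for the simplest case of a single micro-bubble rising in still water under buoyancy and Stokes drag only. This is a strong simplification for illustration: the paper's model also includes the fluid-acceleration force and two-way coupling to the wave-induced carrier flow, and all parameter values here are generic assumptions.

```python
import math

def bubble_rise_velocity(diameter, dt=1e-4, steps=2000,
                         rho_w=1000.0, rho_a=1.2, mu=1e-3, g=9.81):
    """Explicit Euler integration of one micro-bubble's vertical velocity
    under buoyancy and Stokes drag; the added mass of the entrained water
    (0.5 * displaced mass) keeps the explicit scheme stable for micro-bubbles."""
    r = diameter / 2.0
    vol = (4.0 / 3.0) * math.pi * r ** 3
    m_bubble = rho_a * vol            # air mass inside the bubble
    m_displaced = rho_w * vol         # mass of water the bubble displaces
    m_eff = m_bubble + 0.5 * m_displaced
    v = 0.0
    for _ in range(steps):
        buoyancy = (m_displaced - m_bubble) * g
        drag = -6.0 * math.pi * mu * r * v    # Stokes drag (low Reynolds number)
        v += dt * (buoyancy + drag) / m_eff
    return v    # approaches the Stokes terminal rise velocity
```

The quadratic dependence of the terminal velocity on diameter is visible immediately: doubling the diameter roughly quadruples the rise speed, consistent with larger bubbles affecting the flow more strongly.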
  • Algorithms, Vol. 15, Pages 111: A Variable Step Size Normalized
           Least-Mean-Square Algorithm Based on Data Reuse

    • Authors: Alexandru-George Rusu, Constantin Paleologu, Jacob Benesty, Silviu Ciochină
      First page: 111
      Abstract: The principal issue in acoustic echo cancellation (AEC) is to estimate the impulse response between the loudspeaker and the microphone of a hands-free communication device. This application can be addressed as a system identification problem, which can be solved by using an adaptive filter. The most common one for AEC is the normalized least-mean-square (NLMS) algorithm. It is known that the overall performance of this algorithm is controlled by the value of its normalized step size parameter. In order to obtain a proper compromise between the main performance criteria (e.g., convergence rate/tracking versus accuracy/robustness), this specific term of the NLMS algorithm can be further controlled and designed as a variable parameter. This is the main motivation behind the development of variable step size algorithms. In this paper, we propose a variable step size NLMS (VSS-NLMS) algorithm that exploits the data reuse mechanism, which aims to improve the convergence rate/tracking of the algorithm by reusing the same set of data (i.e., the input and reference signals) several times. To this end, we employed an equivalent version of the data reuse NLMS, which reveals the convergence modes of the algorithm. Based on this approach, a sequence of normalized step sizes can be scheduled a priori, which is advantageous in terms of computational complexity. Simulation results in the context of AEC support the good performance features of the proposed VSS-NLMS algorithm.
      Citation: Algorithms
      PubDate: 2022-03-24
      DOI: 10.3390/a15040111
      Issue No: Vol. 15, No. 4 (2022)
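The data-reuse mechanism at the core of the algorithm above can be illustrated with the classical NLMS update applied several times to the same input/reference pair. The sketch below is illustrative only (fixed step size, hypothetical function name), not the authors' VSS-NLMS:

```python
# Minimal sketch of the classical NLMS update with data reuse.
# Not the authors' VSS-NLMS: the step size mu is fixed here, whereas the
# paper schedules a sequence of normalized step sizes a priori.
def nlms_data_reuse(x, d, w, mu=0.5, reuses=3, eps=1e-8):
    """One adaptation step: reuse the same (input, reference) pair
    `reuses` times, as in data-reuse NLMS."""
    for _ in range(reuses):
        y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
        e = d - y                                  # a priori error
        norm = sum(xi * xi for xi in x) + eps      # input energy
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return w, e
```

Reusing each sample several times speeds up convergence at the cost of extra per-sample computation, which is the trade-off the scheduled step sizes mitigate.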
  • Algorithms, Vol. 15, Pages 112: Editorial Paper for the Special Issue
           “Algorithms in Hyperspectral Data Analysis”

    • Authors: Raffaele Pizzolante
      First page: 112
      Abstract: This Special Issue contains four papers focused on hyperspectral data analysis [...]
      Citation: Algorithms
      PubDate: 2022-03-25
      DOI: 10.3390/a15040112
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 113: An LSM-Tree Index for Spatial Data

    • Authors: Junjun He, Huahui Chen
      First page: 113
      Abstract: An LSM-tree (log-structured merge-tree) is a hierarchical, ordered and disk-oriented data storage structure that makes full use of sequential disk writes, which perform much better than random writes. However, an LSM-tree can only be queried by key and cannot meet the needs of spatial queries. To improve the query efficiency of spatial data stored in LSM-trees, the traditional method is to introduce stand-alone tree-like secondary indexes, which suffer from the read amplification brought about by dual-index queries. Moreover, as more spatial data are stored, the index tree becomes increasingly large, lowering query efficiency and raising the index update cost. To address these problems, this paper proposes an ER-tree (embedded R-tree) index structure based on the orderliness of LSM-tree data. By building an SER-tree (an R-tree embedded in an SSTable) index structure for each storage component, we optimised dual-index queries into single-index queries and organised the SER-tree indexes into an ER-tree index with a binary linked list. The experiments showed that the query performance of the ER-tree index was effectively improved compared to that of stand-alone R-tree indexes.
      Citation: Algorithms
      PubDate: 2022-03-25
      DOI: 10.3390/a15040113
      Issue No: Vol. 15, No. 4 (2022)
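The spatial-query problem the paper addresses can be illustrated with two primitives: the bounding-box overlap test an R-tree node query relies on, and a naive window query over several sorted storage components (a stand-in for searching one SER-tree per component and merging the results). All names are hypothetical:

```python
def bbox_overlaps(a, b):
    """Axis-aligned rectangle overlap test, the primitive an R-tree node
    query relies on. Rectangles are (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def query_runs(runs, window):
    """Window query over several storage components (SSTable-like runs).
    A linear scan per run stands in for searching that run's embedded
    SER-tree; the per-run results are merged into one answer."""
    hits = []
    for run in runs:                       # one embedded index per run
        for x, y in run:
            if window[0] <= x <= window[2] and window[1] <= y <= window[3]:
                hits.append((x, y))
    return hits
```

The point of the embedded index is to replace the inner linear scan with an R-tree descent while avoiding a second, stand-alone index lookup per query.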
  • Algorithms, Vol. 15, Pages 114: EEG Pattern Classification of Picking and
           Coordination Using Anonymous Random Walks

    • Authors: Inon Zuckerman, Dor Mizrahi, Ilan Laufer
      First page: 114
      Abstract: Tacit coordination games are games in which players try to select the same solution without any communication between them. Various theories have attempted to predict behavior in tacit coordination games. Until now, research combining tacit coordination games with electrophysiological measures was mainly based on spectral analysis. In contrast, EEG coherence enables the examination of functional and morphological connections between brain regions. Hence, we aimed to differentiate between different cognitive conditions using coherence patterns. Specifically, we designed a method that predicts the class label of coherence graph patterns extracted from multi-channel EEG epochs taken from three conditions: a no-task condition and two cognitive tasks, picking and coordination. The classification process was based on a coherence graph extracted from the EEG record. To assign each graph to its appropriate label, we constructed a hierarchical classifier. First, we distinguished between the resting-state condition and the two cognitive tasks by using a bag of node degrees. Next, to distinguish between the two cognitive tasks, we implemented an anonymous random walk. Our classification model achieved a total accuracy of 96.55%.
      Citation: Algorithms
      PubDate: 2022-03-26
      DOI: 10.3390/a15040114
      Issue No: Vol. 15, No. 4 (2022)
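An anonymous random walk, the device used in the second classification stage, replaces node identities with their order of first appearance, so that structurally identical walks from different graphs map to the same pattern. A minimal sketch (hypothetical names, not the authors' code):

```python
import random

def anonymous_walk(walk):
    """Relabel nodes by order of first appearance, so ('a','b','a','c')
    and ('x','y','x','z') both become (0, 1, 0, 2)."""
    first_seen = {}
    out = []
    for v in walk:
        if v not in first_seen:
            first_seen[v] = len(first_seen)
        out.append(first_seen[v])
    return tuple(out)

def random_walk(adj, start, length, rng):
    """Uniform random walk of `length` steps on adjacency dict `adj`."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk
```

Counting the anonymized patterns of many sampled walks yields a node-identity-free signature of a coherence graph, which is what makes graphs from different subjects comparable.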
  • Algorithms, Vol. 15, Pages 115: Deep Learning Study of an Electromagnetic Calorimeter

    • Authors: Elihu Sela, Shan Huang, David Horn
      First page: 115
      Abstract: The accurate and precise extraction of information from a modern particle detector, such as an electromagnetic calorimeter, may be complicated and challenging. In order to overcome the difficulties, we process the simulated detector outputs using the deep-learning methodology. Our algorithmic approach makes use of a known network architecture, which has been modified to fit the problems at hand. The results are of high quality (biases of order 1 to 2%) and, moreover, indicate that most of the information may be derived from only a fraction of the detector. We conclude that such an analysis helps us understand the essential mechanism of the detector and should be performed as part of its design procedure.
      Citation: Algorithms
      PubDate: 2022-03-28
      DOI: 10.3390/a15040115
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 116: A Multitask Learning Framework for Abuse
           Detection and Emotion Classification

    • Authors: Yucheng Huang, Rui Song, Fausto Giunchiglia, Hao Xu
      First page: 116
      Abstract: The rapid development of online social media makes abuse detection a hot topic in the field of emotional computing. However, most natural language processing (NLP) methods focus only on the linguistic features of posts and ignore the influence of users’ emotions. To tackle this problem, we propose a multitask framework combining abuse detection and emotion classification (MFAE) to expand the representation capability of the algorithm on the basis of an existing pretrained language model. Specifically, we use bidirectional encoder representations from transformers (BERT) as the encoder to generate sentence representations. Then, we use two different decoders for emotion classification and abuse detection, respectively. To further strengthen the influence of the emotion classification task on abuse detection, we propose a cross-attention (CA) component in the decoder, which further improves the learning effect of our multitask learning framework. Experimental results on five public datasets show that our method is superior to other state-of-the-art methods.
      Citation: Algorithms
      PubDate: 2022-03-28
      DOI: 10.3390/a15040116
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 117: Performance of Parallel K-Means Algorithms
           in Java

    • Authors: Libero Nigro
      First page: 117
      Abstract: K-means is a well-known clustering algorithm often used for its simplicity and potential efficiency. Its properties and limitations have been investigated in many works reported in the literature. K-means, though, suffers from computational problems when dealing with large datasets with many dimensions and a great number of clusters. Therefore, many authors have proposed and experimented with different techniques for the parallel execution of K-means. This paper describes a novel approach to parallel K-means based on today's commodity multicore machines with shared memory. Two reference implementations in Java are developed and their performances are compared. The first is structured according to a map/reduce schema that leverages the built-in multi-threaded concurrency that Java automatically provides to parallel streams. The second, allocated on the available cores, exploits the parallel programming model of the Theatre actor system, which is control-based, totally lock-free, and purposely relies on threads as coarse-grain “programming-in-the-large” units. The experimental results confirm that good execution performance can be achieved through the implicit and intuitive use of Java concurrency in parallel streams. However, better execution performance can be guaranteed by the modular Theatre implementation, which proves more adequate for exploiting the computational resources.
      Citation: Algorithms
      PubDate: 2022-03-29
      DOI: 10.3390/a15040117
      Issue No: Vol. 15, No. 4 (2022)
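The map/reduce schema followed by the first Java implementation can be sketched language-neutrally: a parallel map computes nearest-centroid assignments, and a reduction recomputes the centroids. The Python sketch below mirrors the schema only, not the paper's Java streams or Theatre actor code:

```python
from concurrent.futures import ThreadPoolExecutor

def nearest(point, centroids):
    """Map step: index of the closest centroid (squared Euclidean)."""
    return min(range(len(centroids)),
               key=lambda j: sum((p - c) ** 2
                                 for p, c in zip(point, centroids[j])))

def kmeans_step(points, centroids):
    """One Lloyd iteration in map/reduce style: parallel assignment,
    then a per-cluster mean reduction."""
    with ThreadPoolExecutor() as pool:
        labels = list(pool.map(lambda p: nearest(p, centroids), points))
    new = []
    for j in range(len(centroids)):
        members = [p for p, l in zip(points, labels) if l == j]
        if members:
            new.append(tuple(sum(d) / len(members) for d in zip(*members)))
        else:
            new.append(centroids[j])   # keep empty clusters in place
    return new, labels
```

The assignment map is embarrassingly parallel, which is why both Java parallel streams and actor-based partitioning can exploit it; the reduction is the synchronization point.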
  • Algorithms, Vol. 15, Pages 118: Boosting Iris Recognition by Margin-Based
           Loss Functions

    • Authors: Reihan Alinia Lat, Sebelan Danishvar, Hamed Heravi, Morad Danishvar
      First page: 118
      Abstract: In recent years, the topic of contactless biometric identification has gained considerable traction due to the COVID-19 pandemic. One of the most well-known identification technologies is iris recognition. Determining the classification threshold for large datasets of iris images remains challenging. To solve this issue, it is essential to extract more discriminative features from iris images. Choosing an appropriate loss function to enhance discrimination power is one of the most significant factors in deep learning networks. This paper proposes a novel iris identification framework that integrates the lightweight MobileNet architecture with customized ArcFace and Triplet loss functions. By combining the two loss functions, it is possible to improve the compactness within a class and the discrepancies between classes. To reduce the amount of preprocessing, the normalization step is omitted and segmented iris images are used directly. In contrast to the original SoftMax loss, the EER for the combined ArcFace and Triplet loss is decreased from 1.11% to 0.45%, and the TPR is increased from 99.77% to 100%. On CASIA-Iris-Thousand, the EER decreased from 4.8% to 1.87%, while the TPR improved from 97.42% to 99.66%. Experiments have demonstrated that the proposed approach with the customized ArcFace and Triplet loss can significantly improve on the state of the art and achieve outstanding results.
      Citation: Algorithms
      PubDate: 2022-03-29
      DOI: 10.3390/a15040118
      Issue No: Vol. 15, No. 4 (2022)
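The ArcFace component of the customized loss adds an angular margin m to the target-class angle before scaling, which is what tightens intra-class compactness. A minimal sketch of that logit transformation (parameter values are illustrative defaults, not the paper's):

```python
import math

def arcface_logits(cosines, target, s=64.0, m=0.5):
    """Apply the additive angular margin to the target class only:
    the target logit becomes s * cos(theta_y + m), all others s * cos."""
    out = []
    for j, c in enumerate(cosines):
        if j == target:
            theta = math.acos(max(-1.0, min(1.0, c)))  # clamp for safety
            out.append(s * math.cos(theta + m))
        else:
            out.append(s * c)
    return out
```

Because cos(theta + m) < cos(theta) for angles in (0, pi - m), the margin makes the target class harder to satisfy during training, pushing embeddings of the same iris closer together on the hypersphere.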
  • Algorithms, Vol. 15, Pages 119: False Information Detection via Multimodal
           Feature Fusion and Multi-Classifier Hybrid Prediction

    • Authors: Yi Liang, Turdi Tohti, Askar Hamdulla
      First page: 119
      Abstract: In existing false information detection methods, the quality of the extracted single-modality features is low, the information between different modalities cannot be fully fused, and original information is lost when the information from different modalities is fused. This paper proposes a false information detection method via multimodal feature fusion and multi-classifier hybrid prediction. In this method, bidirectional encoder representations from transformers (BERT) are first used to extract the text features and a Swin-Transformer is used to extract the picture features; then, a trained deep autoencoder is used as an early fusion method to fuse the text and visual features, and the resulting low-dimensional features are taken as the joint features of the modalities. The original features of each modality are concatenated into the joint features to reduce the loss of original information. Finally, the text features, image features and joint features are processed by three classifiers to obtain three probability distributions, and the three probability distributions are added proportionally to obtain the final prediction result. Compared with attention-based multimodal factorized bilinear pooling, the model achieves 4.3% and 1.2% improvements in accuracy on the Weibo and Twitter datasets, respectively. The experimental results show that the proposed model can effectively integrate multimodal information and improve the accuracy of false information detection.
      Citation: Algorithms
      PubDate: 2022-03-29
      DOI: 10.3390/a15040119
      Issue No: Vol. 15, No. 4 (2022)
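The final multi-classifier hybrid prediction step, adding the three probability distributions proportionally, can be sketched as follows (the weights are illustrative placeholders, not the paper's tuned values):

```python
def hybrid_predict(p_text, p_image, p_joint, weights=(0.4, 0.3, 0.3)):
    """Proportionally add the text, image and joint classifiers'
    probability distributions and return (predicted class, fused dist).
    The weights here are hypothetical; the paper's proportions may differ."""
    wt, wi, wj = weights
    fused = [wt * t + wi * i + wj * j
             for t, i, j in zip(p_text, p_image, p_joint)]
    return max(range(len(fused)), key=fused.__getitem__), fused
```

Since the weights sum to one, the fused vector remains a valid probability distribution, and a modality with low confidence simply contributes less to the argmax.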
  • Algorithms, Vol. 15, Pages 120: Multi-Level Fusion Model for Person
           Re-Identification by Attribute Awareness

    • Authors: Shengyu Pei, Xiaoping Fan
      First page: 120
      Abstract: Existing person re-identification (Re-ID) methods usually suffer from poor generalization capability and over-fitting problems caused by insufficient training samples. We find that high-level attributes, semantic information, and part-based local information alignment are useful for person Re-ID networks. In this study, we propose a person re-identification network with part-based attribute-enhanced features. The model includes a multi-task learning module, a local information alignment module, and a global information learning module. A ResNet based on non-local operations and instance batch normalization (IBN) learns more discriminative feature representations. The multi-task, local, and global modules are used in parallel for feature extraction. To better prevent over-fitting, the local information alignment module transforms pedestrian attitude alignment into local information alignment to assist attribute recognition. Extensive experiments are carried out on the Market-1501 and DukeMTMC-reID datasets, the results of which demonstrate that the method is superior to most current algorithms.
      Citation: Algorithms
      PubDate: 2022-03-30
      DOI: 10.3390/a15040120
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 121: Combinatorial Integral Approximation
           Decompositions for Mixed-Integer Optimal Control

    • Authors: Clemens Zeile, Tobias Weber, Sebastian Sager
      First page: 121
      Abstract: Solving mixed-integer nonlinear programs (MINLPs) is hard from both a theoretical and practical perspective. Decomposing the nonlinear and the integer part is promising from a computational point of view. In general, however, no bounds on the objective value gap can be established and iterative procedures with potentially many subproblems are necessary. The situation is different for mixed-integer optimal control problems with binary variables that switch over time. Here, a priori bounds were derived for a decomposition into one continuous nonlinear control problem and one mixed-integer linear program, the combinatorial integral approximation (CIA) problem. In this article, we generalize and extend the decomposition idea. First, we derive different decompositions and analyze the implied a priori bounds. Second, we propose several strategies to recombine promising candidate solutions for the binary control functions in the original problem. We present the extensions for ordinary differential equations-constrained problems. These extensions are transferable in a straightforward way, though, to recently suggested variants for certain partial differential equations, for algebraic equations, for additional combinatorial constraints, and for discrete time problems. We implemented all algorithms and subproblems in AMPL for a proof-of-concept study. Numerical results show the improvement compared to the standard CIA decomposition with respect to objective function value and compared to general-purpose MINLP solvers with respect to runtime.
      Citation: Algorithms
      PubDate: 2022-03-31
      DOI: 10.3390/a15040121
      Issue No: Vol. 15, No. 4 (2022)
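A standard building block of CIA decompositions is sum-up rounding, which turns the relaxed binary control produced by the continuous subproblem into a 0-1 control whose accumulated integral tracks the relaxation. A minimal sketch on a uniform grid (the classic single-control construction, not the authors' generalized decompositions):

```python
def sum_up_rounding(alpha, dt):
    """Round a relaxed control alpha(t) in [0, 1] on a uniform grid to a
    binary control: switch on whenever the accumulated relaxed integral
    runs at least half a cell ahead of the accumulated binary integral."""
    binary, acc = [], 0.0
    for a in alpha:
        acc += a * dt                      # integral of the relaxed control
        b = 1 if acc - sum(binary) * dt >= 0.5 * dt else 0
        binary.append(b)
    return binary
```

This construction keeps the integrated control deviation bounded by the grid width, which is the kind of a priori bound the decomposition approach relies on.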
  • Algorithms, Vol. 15, Pages 122: Neuroevolution for Parameter Adaptation in
           Differential Evolution

    • Authors: Vladimir Stanovov, Shakhnaz Akhmedova, Eugene Semenkin
      First page: 122
      Abstract: Parameter adaptation is one of the key research fields in the area of evolutionary computation. In this study, the application of neuroevolution of augmented topologies to design efficient parameter adaptation techniques for differential evolution is considered. The artificial neural networks in this study are used for setting the scaling factor and crossover rate values based on the available information about the algorithm performance and previous successful values. The training is performed on a set of benchmark problems, and the testing and comparison is performed on several different benchmarks to evaluate the generalizing ability of the approach. The neuroevolution is enhanced with lexicase selection to handle the noisy fitness landscape of the benchmarking results. The experimental results show that it is possible to design efficient parameter adaptation techniques comparable to state-of-the-art methods, although such an automatic search for heuristics requires significant computational effort. The automatically designed solutions can be further analyzed to extract valuable knowledge about parameter adaptation.
      Citation: Algorithms
      PubDate: 2022-04-07
      DOI: 10.3390/a15040122
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 123: Cloud Computing in Free Route Airspace

    • Authors: Peter Szabó, Miroslava Ferencová, Vladimír Železník
      First page: 123
      Abstract: In our research we use technical documentation, data structures, data, and algorithms. These objects support our work, but we cannot offer a unique citation for each of them. This paper proposes a citation and reference management method for citing such supporting resources using Cloud Computing. Under this method, a publication cites only one source in the Cloud, and this source contains the Cloud schema, which describes the Cloud infrastructure. When we make a citation using the Cloud schema, we can pinpoint the cited object exactly. The proposed method supports open research: all research items in the Cloud are freely available. To illustrate the method, we applied it to the modelling of free route airspace (FRA). FRA is a new concept in Air Traffic Management and is also the subject of our research.
      Citation: Algorithms
      PubDate: 2022-04-07
      DOI: 10.3390/a15040123
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 124: Point Cloud Upsampling Algorithm: A
           Systematic Review

    • Authors: Yan Zhang, Wenhan Zhao, Bo Sun, Ying Zhang, Wen Wen
      First page: 124
      Abstract: Point cloud upsampling algorithms can improve the resolution of point clouds and generate dense and uniform point clouds, making them an important image processing technology. Significant progress has been made in point cloud upsampling research in recent years. This paper provides a comprehensive survey of point cloud upsampling algorithms. We classify existing point cloud upsampling algorithms into optimization-based methods and deep learning-based methods, and analyze the advantages and limitations of different algorithms from a modular perspective. In addition, we cover other important issues such as public datasets and performance evaluation metrics. Finally, we conclude this survey by highlighting several future research directions and open issues that should be further addressed.
      Citation: Algorithms
      PubDate: 2022-04-08
      DOI: 10.3390/a15040124
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 125: Computational Approaches for Grocery Home
           Delivery Services

    • Authors: Christian Truden, Kerstin Maier, Anna Jellen, Philipp Hungerländer
      First page: 125
      Abstract: The steadily growing popularity of grocery home-delivery services is most likely based on the convenience experienced by their customers. However, the perishable nature of the products imposes certain requirements during the delivery process. The customer must be present when the delivery arrives so that the delivery process can be completed without interrupting the cold chain. Therefore, the grocery retailer and the customer must mutually agree on a time window during which the delivery can be guaranteed. This concept is referred to as the attended home delivery (AHD) problem in the scientific literature. The phase during which customers place orders, usually through a web service, constitutes the computationally most challenging part of the logistical processes behind such services. The system must determine potential delivery time windows that can be offered to incoming customers and incrementally build the delivery schedule as new orders are placed. Typically, the underlying optimization problem is a vehicle routing problem with time windows. This work is concerned with a case given by an international grocery retailer’s online shopping service. We present an analysis of several efficient solution methods that can be employed in AHD services. A framework for the operational planning tools required to tackle the order placement process is provided; the basic framework can easily be adapted for many similar vehicle routing applications. We provide a comprehensive computational study comparing several algorithmic strategies, combining heuristics utilizing local search operations and mixed-integer linear programs, for tackling the booking process. Finally, we analyze the scalability and suitability of the approaches.
      Citation: Algorithms
      PubDate: 2022-04-09
      DOI: 10.3390/a15040125
      Issue No: Vol. 15, No. 4 (2022)
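The core feasibility question during order placement, whether a delivery route still respects every agreed time window, reduces to a single forward pass over the route. A minimal sketch (the data layout is hypothetical, not the paper's model):

```python
def route_feasible(stops, depot_departure=0.0):
    """Check a delivery route against customer time windows.
    Each stop is (travel_time_from_previous, service_time, earliest, latest).
    The vehicle waits if it arrives before `earliest`; the route fails if
    it arrives after `latest` at any stop."""
    t = depot_departure
    for travel, service, earliest, latest in stops:
        t += travel
        if t > latest:
            return False          # window already closed on arrival
        t = max(t, earliest) + service
    return True
```

Local search operators of the kind compared in the paper (insertions, swaps) typically call such a check, or an incremental variant of it, thousands of times per incoming order, which is why its cost dominates the booking phase.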
  • Algorithms, Vol. 15, Pages 126: A Truly Robust Signal Temporal Logic:
           Monitoring Safety Properties of Interacting Cyber-Physical Systems under
           Uncertain Observation

    • Authors: Bernd Finkbeiner, Martin Fränzle, Florian Kohn, Paul Kröger
      First page: 126
      Abstract: Signal Temporal Logic is a linear-time temporal logic designed for classifying the time-dependent signals originating from continuous-state or hybrid-state dynamical systems according to formal specifications. It was conceived as a tool for systematizing the monitoring of cyber-physical systems, supporting the automatic translation of complex safety specifications into monitoring algorithms that faithfully represent their semantics. Almost all algorithms hitherto suggested do, however, assume perfect identity between the sensor readings informing the monitor about the system state and the actual ground truth. Only recently have Visconti et al. addressed the issue of inexact measurements, taking up the simple model of an interval-bounded per-sample error that is unrelated, in the sense of being chosen afresh, across samples. We expand their analysis by decomposing the error into an unknown yet fixed offset and an independent per-sample error, and we show that in this setting, monitoring of temporal properties no longer coincides with collecting Boolean combinations of state predicates evaluated in each time instant over best-possible per-sample state estimates, but can be genuinely more informative in that it infers determinate truth values for monitoring conditions about which interval-based evaluation remains inconclusive. For the model-free as well as the linear model-based case, we provide optimal evaluation algorithms based on affine arithmetic and satisfiability modulo theories (SMT) solving over linear arithmetic. The resulting algorithms provide conclusive monitoring verdicts in many cases where state estimations inherently remain inconclusive. In their model-based variants, they can simultaneously address the issues of uncertain sensing and partial observation.
      Citation: Algorithms
      PubDate: 2022-04-11
      DOI: 10.3390/a15040126
      Issue No: Vol. 15, No. 4 (2022)
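The gain from modeling a fixed-but-unknown offset can be seen with a toy affine-arithmetic form in which the offset is a noise symbol shared between samples: it cancels in differences, yielding determinate verdicts where plain interval arithmetic stays inconclusive. An illustrative sketch, not the paper's algorithm:

```python
class Affine:
    """Toy affine form: value + sum(coef_i * eps_i), each eps_i in [-1, 1].
    A noise symbol shared between readings models the fixed-but-unknown
    sensor offset; per-sample symbols model independent noise."""
    def __init__(self, value, terms=None):
        self.value, self.terms = value, dict(terms or {})

    def __sub__(self, other):
        terms = dict(self.terms)
        for sym, c in other.terms.items():
            terms[sym] = terms.get(sym, 0.0) - c
        return Affine(self.value - other.value, terms)

    def bounds(self):
        radius = sum(abs(c) for c in self.terms.values())
        return self.value - radius, self.value + radius

# Two readings sharing the offset symbol "off" (magnitude 0.5) plus
# independent per-sample noise of magnitude 0.1 each:
x1 = Affine(1.0, {"off": 0.5, "e1": 0.1})
x2 = Affine(0.7, {"off": 0.5, "e2": 0.1})
```

Here `(x1 - x2).bounds()` gives (0.1, 0.5), so the predicate x1 > x2 holds determinately, whereas naive interval subtraction of [0.4, 1.6] and [0.1, 1.3] yields [-0.9, 1.5] and remains inconclusive.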
  • Algorithms, Vol. 15, Pages 127: A Statistical Approach to Discovering
           Process Regime Shifts and Their Determinants

    • Authors: Atiq W. Siddiqui, Syed Arshad Raza
      First page: 127
      Abstract: Systematic behavioral regime shifts inevitably emerge in real-world processes in response to various determinants, thus resulting in temporally dynamic responses. These determinants can be technical, such as process handling, design, or policy elements; or environmental, socio-economic or socio-technical in nature. This work proposes a novel two-stage methodology in which the first stage involves statistically identifying and dating all regime shifts in the time series process event logs. The second stage entails identifying contender determinants, which are statistically and temporally evaluated for their role in forming new behavioral regimes. The methodology is general, allowing varying process evaluation bases while putting minimal restrictions on process output data distribution. We demonstrated the efficacy of our approach via three cases of technical, socio-economic and socio-technical nature. The results show the presence of regime shifts in the output logs of these cases. Various determinants were identified and analyzed for their role in their formation. We found that some of the determinants indeed caused specific regime shifts, whereas others had no impact on their formation.
      Citation: Algorithms
      PubDate: 2022-04-13
      DOI: 10.3390/a15040127
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 128: A Fuzzy Grouping Genetic Algorithm for
           Solving a Real-World Virtual Machine Placement Problem in a Healthcare-Cloud

    • Authors: Nawaf Alharbe, Abeer Aljohani, Mohamed Ali Rakrouki
      First page: 128
      Abstract: Due to the large-scale development of cloud computing, data center electricity costs have increased rapidly. Energy saving has become a major research direction for virtual machine placement problems. At the same time, the multi-dimensional resources of the cloud should be used in a balanced manner in order to avoid resource waste. In this context, this paper addresses a real-world virtual machine placement problem arising in the Healthcare-Cloud (H-Cloud) of a hospital chain in Saudi Arabia, considering server power consumption and resource utilization. As part of optimizing both objectives, user service quality has to be taken into account; user quality of service (QoS) is considered by measuring the Service-Level Agreement (SLA) violation rate. The problem is modeled as a multi-objective virtual machine placement problem with the objectives of minimizing power consumption, resource utilization, and the SLA violation rate. To solve this challenging problem, a fuzzy grouping genetic algorithm (FGGA) is proposed. Considering that the multiple optimization objectives may influence the problem to different degrees, the fitness function of the proposed algorithm is calculated with a fuzzy logic-based function. The experimental results show the effectiveness of the proposed algorithm.
      Citation: Algorithms
      PubDate: 2022-04-14
      DOI: 10.3390/a15040128
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 129: Convolutional-Neural-Network-Based
           Handwritten Character Recognition: An Approach with Massive Multisource

    • Authors: Nazmus Saqib, Khandaker Foysal Haque, Venkata Prasanth Yanambaka, Ahmed Abdelgawad
      First page: 129
      Abstract: Neural networks have made big strides in image classification. Convolutional neural networks (CNNs) make it possible to run neural networks directly on images. Handwritten character recognition (HCR) is now a very powerful tool used to detect traffic signals, translate languages, extract information from documents, etc. Although handwritten character recognition technology is in use in industry, present accuracy is not outstanding, which compromises both performance and usability. Thus, the character recognition technologies in use are still not very reliable and need further improvement before being extensively deployed for serious and reliable tasks. On this account, recognition of English alphabet characters and digits is performed by proposing a custom-tailored CNN model with two different datasets of handwritten images, i.e., Kaggle and MNIST, respectively, which is lightweight but achieves higher accuracies than state-of-the-art models. The best two of the twelve designed models are proposed after altering hyper-parameters to observe which models provide the best accuracy for which dataset. In addition, the classification reports (CRs) of these two proposed models are extensively investigated considering performance metrics, such as precision, recall, specificity, and F1 score, obtained from the developed confusion matrix (CM). To simulate a practical scenario, the dataset is kept unbalanced and three further averages of the F measure (micro, macro, and weighted) are calculated, which facilitates a better understanding of the models’ performance. The highest accuracy of 99.642% is achieved for digit recognition, with the model using ‘RMSprop’ at a learning rate of 0.001, whereas the highest detection accuracy for alphabet recognition is 99.563%, obtained with the proposed model using the ‘ADAM’ optimizer at a learning rate of 0.00001. The macro F1 and weighted F1 scores for the best two models are 0.998 and 0.997 for digit recognition, and 0.992 and 0.996 for alphabet recognition, respectively.
      Citation: Algorithms
      PubDate: 2022-04-14
      DOI: 10.3390/a15040129
      Issue No: Vol. 15, No. 4 (2022)
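The macro and weighted F1 averages reported above differ only in whether per-class F1 scores are averaged uniformly or by class support, which matters precisely because the dataset is kept unbalanced. A minimal sketch from raw counts (hypothetical helper, not the authors' code):

```python
def f1_scores(tp, fp, fn):
    """Per-class F1 from true-positive / false-positive / false-negative
    counts; returns (macro, weighted) averages."""
    f1s, support = [], []
    for t, p, n in zip(tp, fp, fn):
        prec = t / (t + p) if t + p else 0.0
        rec = t / (t + n) if t + n else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        support.append(t + n)                 # true instances of the class
    macro = sum(f1s) / len(f1s)               # every class counts equally
    weighted = sum(f * s for f, s in zip(f1s, support)) / sum(support)
    return macro, weighted
```

On unbalanced data, a rare class with poor F1 drags the macro average down much more than the weighted one, which is why reporting both gives a fuller picture of a model.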
  • Algorithms, Vol. 15, Pages 130: Machine Learning Algorithms: An
           Experimental Evaluation for Decision Support Systems

    • Authors: Hugo Silva, Jorge Bernardino
      First page: 130
      Abstract: Decision support systems with machine learning can help organizations improve operations and lower costs with more precision and efficiency. This work presents a review of state-of-the-art machine learning algorithms for binary classification and compares the related metrics between them through their application to public diabetes and human resources datasets. The two main categories that allow learning without explicit programming are supervised and unsupervised learning. For the experiments, we use Scikit-learn, the free machine learning library for the Python language. The best-performing algorithm for supervised learning was Random Forest, while among unsupervised clustering techniques, the Balanced Iterative Reducing and Clustering Using Hierarchies and Spectral Clustering algorithms presented the best results. The experimental evaluation shows that applying unsupervised clustering algorithms does not translate into better results than supervised algorithms. However, applying unsupervised clustering algorithms as preprocessing for supervised techniques can translate into a performance boost.
      Citation: Algorithms
      PubDate: 2022-04-15
      DOI: 10.3390/a15040130
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 131: Multi-Fidelity Gradient-Based Optimization
           for High-Dimensional Aeroelastic Configurations

    • Authors: Andrew S. Thelen, Dean E. Bryson, Bret K. Stanford, Philip S. Beran
      First page: 131
      Abstract: The simultaneous optimization of aircraft shape and internal structural size for transonic flight is excessively costly. The analysis of the governing physics is expensive, in particular for highly flexible aircraft, and the search for optima using analysis samples can scale poorly with design space size. This paper has a two-fold purpose targeting the scalable reduction of analysis sampling. First, a new algorithm is explored for computing design derivatives by analytically linking objective definition, geometry differentiation, mesh construction, and analysis. The analytic computation of design derivatives enables the accurate use of more efficient gradient-based optimization methods. Second, the scalability of a multi-fidelity algorithm is assessed for optimization in high dimensions. This method leverages a multi-fidelity model during the optimization line search for further reduction of sampling costs. The multi-fidelity optimization is demonstrated for cases of aerodynamic and aeroelastic design considering both shape and structural sizing separately and in combination with design spaces ranging from 17 to 321 variables, which would be infeasible using typical, surrogate-based methods. The multi-fidelity optimization consistently led to a reduction in high-fidelity evaluations compared to single-fidelity optimization for the aerodynamic shape problems, but frequently resulted in a cost penalty for cases involving structural sizing. While the multi-fidelity optimizer was successfully applied to problems with hundreds of variables, the results underscore the importance of accurately computing gradients and motivate the extension of the approach to constrained optimization methods.
      Citation: Algorithms
      PubDate: 2022-04-16
      DOI: 10.3390/a15040131
      Issue No: Vol. 15, No. 4 (2022)
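The multi-fidelity line search described in this abstract, screening step lengths with a cheap model and confirming with the expensive one, can be sketched on a toy one-variable problem. Everything below (function names, the quadratic objectives, the step screening) is an illustrative stand-in, not the paper's aeroelastic solver:

```python
# Toy multi-fidelity line search: the descent direction comes from the
# analytic high-fidelity gradient; candidate step lengths are screened with
# a cheap low-fidelity model, and a single high-fidelity evaluation accepts
# or rejects the chosen step.

def f_hi(x):                      # "expensive" high-fidelity objective
    return (x - 3.0) ** 2

def grad_hi(x):                   # analytic design derivative
    return 2.0 * (x - 3.0)

def f_lo(x):                      # cheap, slightly biased surrogate
    return (x - 3.0) ** 2 + 0.1 * x

def multifidelity_descent(x, iters=30):
    hi_evals = 0
    for _ in range(iters):
        d = -grad_hi(x)
        # Line search on the low-fidelity model only.
        step = min([1.0, 0.5, 0.25, 0.1, 0.01],
                   key=lambda a: f_lo(x + a * d))
        # One high-fidelity check before accepting the step.
        if f_hi(x + step * d) < f_hi(x):
            x = x + step * d
        hi_evals += 2             # trial point plus current point
    return x, hi_evals

x_opt, hi_evals = multifidelity_descent(0.0)
```

The high-fidelity evaluation count stays fixed per iteration regardless of how many candidate steps the surrogate screens, which is the source of the cost savings the abstract reports.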
  • Algorithms, Vol. 15, Pages 132: Optimizing Finite-Difference Operator in
           Seismic Wave Numerical Modeling

    • Authors: Hui Li, Yuan Fang, Zhiguo Huang, Mengyao Zhang, Qing Wei
      First page: 132
      Abstract: The finite-difference method is widely used in seismic wave numerical simulation, imaging, and waveform inversion. In this method, the differential operator is approximated by a finite-difference operator, which can be obtained by truncating the spatial convolution series. The properties of the truncating window function, such as the main and side lobes of its amplitude response, determine the accuracy of the finite difference, which subsequently affects the seismic imaging and inversion results significantly. Although numerical dispersion is inevitable in this process, it can be suppressed more effectively by using higher-precision finite-difference operators. In this paper, we use the krill herd algorithm, in contrast with standard particle swarm optimization (PSO) and CDPSO (a variant of PSO), to optimize the finite-difference operator. Numerical simulation results verify that the krill herd algorithm performs well in improving the precision of the differential operator.
      Citation: Algorithms
      PubDate: 2022-04-18
      DOI: 10.3390/a15040132
      Issue No: Vol. 15, No. 4 (2022)
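The accuracy point in the abstract can be seen directly by comparing standard Taylor-series central-difference stencils of different order (these are the textbook coefficients, not the krill-herd-optimized operators of the paper):

```python
import math

def d1_order2(f, x, h):
    # 2nd-order central difference for f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_order4(f, x, h):
    # 4th-order central difference for f'(x)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 1.0, 0.1
exact = math.cos(x)                       # derivative of sin
err2 = abs(d1_order2(math.sin, x, h) - exact)
err4 = abs(d1_order4(math.sin, x, h) - exact)
```

The higher-order operator cuts the truncation error by roughly two orders of magnitude at the same grid spacing, which is why operator precision matters so much for suppressing numerical dispersion.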
  • Algorithms, Vol. 15, Pages 133: Study of the Algorithm for Wind Shear
           Detection with Lidar Based on Shear Intensity Factor

    • Authors: Shijun Zhao, Yulong Shan
      First page: 133
      Abstract: Low-level wind shear is a vital weather process affecting aircraft safety during takeoff and landing and is known as the “aircraft killer” in the aviation industry. As a result, effective monitoring and warning are required. Several ramp detection algorithms for low-level wind shear based on lidar glide-path scanning have been developed, including double- and simple-ramp detection, together with the corresponding ramp length extension and contraction strategies. However, current algorithms must be improved to determine the maximum shear value and its location. In this paper, a new efficient algorithm based on the shear intensity factor is presented, in which both the wind speed change and the distance over which it occurs are considered when calculating wind shear. The effectiveness of the improved algorithm is validated through numerical simulation experiments. Results reveal that the improved algorithm can determine the maximum intensity value and wind shear location more accurately than the traditional algorithm. In addition, the new algorithm improves the detection ability of lidar for weak wind shear.
      Citation: Algorithms
      PubDate: 2022-04-18
      DOI: 10.3390/a15040133
      Issue No: Vol. 15, No. 4 (2022)
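As a rough illustration of ramp scanning along the glide path: one severity measure used in the wind-shear literature divides the headwind change by the cube root of the ramp length, so that both the speed change and the distance over which it occurs enter the score. The paper's shear intensity factor is along these lines, but its exact definition may differ; the profile below is invented:

```python
# Toy ramp scan over a headwind profile sampled along the glide path.
# Severity dv / L**(1/3) is one form used in the wind-shear literature.

def strongest_ramp(ranges_m, wind_ms, max_len=4000.0):
    best = (0.0, None, None)            # (severity, start index, end index)
    n = len(ranges_m)
    for i in range(n):
        for j in range(i + 1, n):
            L = ranges_m[j] - ranges_m[i]
            if L <= 0 or L > max_len:
                continue
            dv = abs(wind_ms[j] - wind_ms[i])
            sev = dv / L ** (1.0 / 3.0)
            if sev > best[0]:
                best = (sev, i, j)
    return best

ranges = [0, 500, 1000, 1500, 2000, 2500]
wind   = [5.0, 5.5, 6.0, 13.0, 13.2, 13.1]   # sharp gain from 1000 m to 1500 m
sev, i, j = strongest_ramp(ranges, wind)
```

Scoring every candidate ramp jointly on speed change and length is what lets such a scheme pinpoint the location of maximum shear rather than only flagging its presence.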
  • Algorithms, Vol. 15, Pages 134: Research and Challenges of Reinforcement
           Learning in Cyber Defense Decision-Making for Intranet Security

    • Authors: Wenhao Wang, Dingyuanhao Sun, Feng Jiang, Xingguo Chen, Cheng Zhu
      First page: 134
      Abstract: In recent years, cyber attacks have shown diversified, purposeful, and organized characteristics, which pose significant challenges to cyber defense decision-making on internal networks. Due to the continuous confrontation between attackers and defenders, data-driven statistical or supervised learning methods alone cannot cope with increasingly severe security threats. It is urgent to rethink network defense from the perspective of decision-making and to prepare for every possible situation. Reinforcement learning has made great breakthroughs in addressing complicated decision-making problems. We propose a framework that defines four modules based on the life cycle of threats: pentest, design, response, and recovery. Our aims are to clarify the boundary of network defense decision-making problems, to study the problem characteristics in different contexts, to compare the strengths and weaknesses of existing research, and to identify promising challenges for future work. Our work provides a systematic view for understanding and solving decision-making problems in the application of reinforcement learning to cyber defense.
      Citation: Algorithms
      PubDate: 2022-04-18
      DOI: 10.3390/a15040134
      Issue No: Vol. 15, No. 4 (2022)
  • Algorithms, Vol. 15, Pages 75: Machine Learning in Cereal Crops Disease
           Detection: A Review

    • Authors: Fraol Gelana Waldamichael, Taye Girma Debelee, Friedhelm Schwenker, Yehualashet Megersa Ayano, Samuel Rahimeto Kebede
      First page: 75
      Abstract: Cereals are an important and major source of the human diet. They constitute more than two-thirds of the world’s food source and cover more than 56% of the world’s cultivatable land. These important sources of food are affected by a variety of damaging diseases, causing significant loss in annual production. In this regard, detection of diseases at an early stage and quantification of their severity have acquired the urgent attention of researchers worldwide. One emerging and popular approach for this task is the utilization of machine learning techniques. In this work, we identified the most common and damaging diseases affecting cereal crop production, and we reviewed 45 works from the past five years on the detection and classification of various diseases that occur on six cereal crops. In addition, we identified and summarised the publicly available datasets for each cereal crop, the scarcity of which we identified as the main challenge facing research on the application of machine learning to cereal crop disease detection. In this survey, we identified deep convolutional neural networks trained on hyperspectral data as the most effective approach for early detection of diseases, and transfer learning as the most commonly used training method, which also yields the best results.
      Citation: Algorithms
      PubDate: 2022-02-24
      DOI: 10.3390/a15030075
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 76: Partitioning of Transportation Networks by
           Efficient Evolutionary Clustering and Density Peaks

    • Authors: Pamela Al Alam, Joseph Constantin, Ibtissam Constantin, Clelia Lopez
      First page: 76
      Abstract: Road traffic congestion has become a major problem in most countries because it affects sustainable mobility. Partitioning a transport network into homogeneous areas can be very useful for monitoring traffic, as congestion is spatially correlated in adjacent roads and propagates at different speeds as a function of time. Spectral clustering has been successfully applied to partition transportation networks based on the spatial characteristics of congestion at a specific time. However, this type of classification is not suitable for data that change over time. Evolutionary spectral clustering represents a state-of-the-art algorithm for grouping objects evolving over time. However, the disadvantages of this algorithm are its cubic time complexity and high memory demand, which make it unable to handle large data sets. In this paper, we propose an efficient evolutionary spectral clustering algorithm that overcomes these drawbacks by reducing the size of the eigenvalue problem. This algorithm is applied in a dynamic environment to partition a transportation network into connected homogeneous regions that evolve with time. The number of clusters is selected automatically by a density peak algorithm adapted to the classification of traffic congestion based on the sparse snake similarity matrix. Experiments on the real network of the city of Amsterdam demonstrate the superiority of the proposed algorithm in robustness and effectiveness.
      Citation: Algorithms
      PubDate: 2022-02-24
      DOI: 10.3390/a15030076
      Issue No: Vol. 15, No. 3 (2022)
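The density-peak selection of the number of clusters can be illustrated in one dimension: each point gets a local density rho and a distance delta to its nearest denser point, and cluster centres stand out by the product rho * delta. This is a minimal sketch of the Rodriguez-Laio idea with a Gaussian kernel density; the paper's sparse snake similarity matrix is not modelled here, and the data are invented:

```python
import math

def density_peaks_gamma(points, dc):
    # rho: Gaussian-kernel local density; delta: distance to the nearest
    # denser point (or the farthest point, for the global density maximum).
    n = len(points)
    d = [[abs(points[i] - points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(math.exp(-(d[i][j] / dc) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    gamma = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta = min(higher) if higher else max(d[i])
        gamma.append(rho[i] * delta)
    return gamma

pts = [0.0, 0.1, 0.2, 0.15, 5.0, 5.1, 5.25, 5.05]
g = density_peaks_gamma(pts, dc=0.3)
centres = sorted(range(len(pts)), key=lambda i: g[i], reverse=True)[:2]
```

The two dominant gamma values land in the two separate groups, so the number of clusters falls out of the gamma ranking without being specified in advance.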
  • Algorithms, Vol. 15, Pages 77: Prediction of Injuries in CrossFit
           Training: A Machine Learning Perspective

    • Authors: Serafeim Moustakidis, Athanasios Siouras, Konstantinos Vassis, Ioannis Misiris, Elpiniki Papageorgiou, Dimitrios Tsaopoulos
      First page: 77
      Abstract: CrossFit has gained recognition and interest among physically active populations, being one of the most popular and rapidly growing exercise regimens worldwide. Due to the intense and repetitive nature of CrossFit, concerns have been raised over the potential injury risks associated with its training, including rhabdomyolysis and musculoskeletal injuries. However, identification of risk factors for predicting injuries in CrossFit athletes has been limited by the absence of relevant large epidemiological studies. The main purpose of this paper is the identification of risk factors and the development of machine learning models using ensemble learning that can predict CrossFit injuries. To accomplish these targets, a survey-based epidemiological study was conducted in Greece to collect data on musculoskeletal injuries in CrossFit practitioners. A machine learning (ML) pipeline was then implemented that involved data pre-processing, feature selection and well-known ML models. The performance of the proposed ML models was assessed using a comprehensive cross-validation mechanism, and a discussion on the nature of the selected features is also provided. An area under the curve (AUC) of 77.93% was achieved by the best ML model, using ensemble learning (AdaBoost) on the group of six selected risk factors. The effectiveness of the proposed approach was evaluated in a comparative analysis with respect to numerous performance metrics, including accuracy, sensitivity, specificity, AUC and confusion matrices, to confirm its clinical relevance. The results form the basis for the development of reliable tools for the prediction of injuries in CrossFit.
      Citation: Algorithms
      PubDate: 2022-02-24
      DOI: 10.3390/a15030077
      Issue No: Vol. 15, No. 3 (2022)
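The AUC reported above can be computed directly from classifier scores via the Mann-Whitney rank statistic, independent of any ML library; the scores and labels below are illustrative:

```python
def auc(scores, labels):
    # AUC = P(score of a random positive > score of a random negative),
    # counting ties as 1/2 (Mann-Whitney U statistic).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]              # 1 = injured, 0 = uninjured
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # model's predicted risk
value = auc(scores, labels)
```

Unlike accuracy, this statistic is threshold-free, which is why it is the headline metric for a risk-prediction model such as the one in the abstract.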
  • Algorithms, Vol. 15, Pages 78: Deterministic Approximate EM Algorithm;
           Application to the Riemann Approximation EM and the Tempered EM

    • Authors: Thomas Lartigue, Stanley Durrleman, Stéphanie Allassonnière
      First page: 78
      Abstract: The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors have modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte Carlo or tempered approximations, among others. Most of the well-studied approximations belong to the stochastic class; by comparison, the literature on deterministic approximations is lacking. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for intractable E-steps, we introduce a deterministic version of MC-EM using Riemann sums. This straightforward method requires no hyper-parameter fine-tuning and is useful when the dimensionality is low enough that MC-EM is not warranted. Then, we consider the tempered approximation, borrowed from the Simulated Annealing literature and used to escape local extrema. We prove that the tempered EM verifies the convergence guarantees for a wider range of temperature profiles than previously considered. We showcase empirically how new non-trivial profiles can more successfully escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations into a method that accomplishes both their purposes.
      Citation: Algorithms
      PubDate: 2022-02-25
      DOI: 10.3390/a15030078
      Issue No: Vol. 15, No. 3 (2022)
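The tempered E-step can be illustrated on a two-component, unit-variance Gaussian mixture: responsibilities are raised to the power 1/T before normalisation, so T = 1 recovers standard EM while T > 1 flattens the posterior. This is a minimal sketch under those simplifying assumptions, not the paper's general framework:

```python
import math

def tempered_em(data, mu, T=1.0, iters=25):
    # E-step: tempered responsibilities (exponent 1/T before normalising).
    # M-step: closed-form mean updates; unit variances, equal weights.
    for _ in range(iters):
        r = []
        for x in data:
            w = [math.exp(-0.5 * (x - m) ** 2) ** (1.0 / T) for m in mu]
            s = sum(w)
            r.append([wi / s for wi in w])
        for k in range(len(mu)):
            num = sum(r[i][k] * data[i] for i in range(len(data)))
            den = sum(r[i][k] for i in range(len(data)))
            mu[k] = num / den
    return mu

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]          # two well-separated groups
mu = tempered_em(data, mu=[1.0, 4.0], T=1.0)   # T = 1: standard EM
```

With a large T, early iterations assign each point more evenly to both components, which is the mechanism that lets a tempered profile escape a bad initialisation before the temperature is annealed back toward 1.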
  • Algorithms, Vol. 15, Pages 79: An Effective Algorithm for Finding Shortest
           Paths in Tubular Spaces

    • Authors: Dang-Viet-Anh Nguyen, Jérôme Szewczyk, Kanty Rabenorosoa
      First page: 79
      Abstract: We propose a novel algorithm to determine the Euclidean shortest path (ESP) from a given point (source) to another point (destination) inside a tubular space. The method is based on the observation data of a virtual particle (VP) assumed to move along this path. In the first step, the geometric properties of the shortest path inside the considered space are presented and proven. Utilizing these properties, the desired ESP can be segmented into three partitions depending on the visibility of the VP. Our algorithm will check which partition the VP belongs to and calculate the correct direction of its movement, and thus the shortest path will be traced. The proposed method is then compared to Dijkstra’s algorithm, considering different types of tubular spaces. In all cases, the solution provided by the proposed algorithm is smoother, shorter, and has a higher accuracy with a faster calculation speed than that obtained by Dijkstra’s method.
      Citation: Algorithms
      PubDate: 2022-02-25
      DOI: 10.3390/a15030079
      Issue No: Vol. 15, No. 3 (2022)
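For reference, the Dijkstra baseline that the proposed algorithm is compared against is the standard priority-queue formulation; a compact version on a small weighted graph (the graph is illustrative):

```python
import heapq

def dijkstra(graph, src, dst):
    # graph: {node: [(neighbour, edge_cost), ...]}
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                      # walk predecessors back to src
        path.append(node)
        node = prev[node]
    return dist[dst], [src] + path[::-1]

g = {"A": [("B", 1.0), ("C", 4.0)],
     "B": [("C", 2.0), ("D", 6.0)],
     "C": [("D", 3.0)]}
cost, path = dijkstra(g, "A", "D")
```

Because Dijkstra works on a discretised graph, its paths are polylines through grid nodes; the abstract's claim of smoother and shorter paths comes from reasoning directly about visibility in the continuous tubular space instead.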
  • Algorithms, Vol. 15, Pages 80: Predicting Dynamic User–Item
           Interaction with Meta-Path Guided Recursive RNN

    • Authors: Yi Liu, Chengyu Yin, Jingwei Li, Fang Wang, Senzhang Wang
      First page: 80
      Abstract: Accurately predicting user–item interactions is critically important in many real applications, including recommender systems and user behavior analysis in social networks. One major drawback of existing studies is that they generally analyze the sparse user–item interaction data directly, without considering their semantic correlations and the structural information hidden in the data. Another limitation is that existing approaches usually embed the users and items into different embedding spaces in a static way, ignoring the dynamic characteristics of both users and items. In this paper, we propose to learn dynamic embedding vector trajectories, rather than static embedding vectors, for users and items simultaneously. A Metapath-guided Recursive RNN based Shift embedding method named MRRNN-S is proposed to learn the continuously evolving embeddings of users and items for more accurately predicting their future interactions. The proposed MRRNN-S extends our previous model RRNN-S, proposed in earlier work. Compared with RRNN-S, we add a word2vec module and a skip-gram-based meta-path module to better capture the rich auxiliary information in the user–item interaction data. Specifically, we first regard the interaction data of each user with items as sentence data to model their semantic and sequential information, and construct the user–item interaction graph. We then sample instances of meta-paths to capture the heterogeneity and structural information from the user–item interaction graph. A recursive RNN is proposed to iteratively and mutually learn the dynamic user and item embeddings in the same latent space based on their historical interactions. Next, a shift embedding module is proposed to predict the future user embeddings. To predict which item a user will interact with, we output the item embedding instead of the pairwise interaction probability between users and items, which is much more efficient. Through extensive experiments on three real-world datasets, we demonstrate that MRRNN-S achieves superior performance in an extensive comparison with state-of-the-art baseline models.
      Citation: Algorithms
      PubDate: 2022-02-28
      DOI: 10.3390/a15030080
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 81: Long-Term Visitation Value for Deep
           Exploration in Sparse-Reward Reinforcement Learning

    • Authors: Simone Parisi, Davide Tateo, Maximilian Hensel, Carlo D’Eramo, Jan Peters, Joni Pajarinen
      First page: 81
      Abstract: Reinforcement learning with sparse rewards is still an open challenge. Classic methods rely on getting feedback via extrinsic rewards to train the agent, and in situations where these occur very rarely the agent learns slowly or cannot learn at all. Similarly, if the agent also receives rewards that create suboptimal modes of the objective function, it will likely prematurely stop exploring. More recent methods add auxiliary intrinsic rewards to encourage exploration. However, auxiliary rewards lead to a non-stationary target for the Q-function. In this paper, we present a novel approach that (1) plans exploration actions far into the future by using a long-term visitation count, and (2) decouples exploration and exploitation by learning a separate function assessing the exploration value of the actions. Contrary to existing methods that use models of reward and dynamics, our approach is off-policy and model-free. We further propose new tabular environments for benchmarking exploration in reinforcement learning. Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function. Results also suggest that our approach scales gracefully with the size of the environment.
      Citation: Algorithms
      PubDate: 2022-02-28
      DOI: 10.3390/a15030081
      Issue No: Vol. 15, No. 3 (2022)
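For contrast with the paper's long-term visitation value, the immediate count-based bonus it improves upon can be sketched in a few lines. The bonus form beta / sqrt(N + 1) and all names here are illustrative, and the paper argues this short-sighted version is exactly what is insufficient for deep exploration:

```python
import math
from collections import defaultdict

# Immediate count-based exploration: pick the action maximising
# Q(s, a) + beta / sqrt(N(s, a) + 1).  The bonus shrinks as a state-action
# pair is visited more often, steering the agent toward novelty.

N = defaultdict(int)        # visitation counts per (state, action)
Q = defaultdict(float)      # learned action values

def select_action(state, actions, beta=1.0):
    def score(a):
        return Q[(state, a)] + beta / math.sqrt(N[(state, a)] + 1)
    return max(actions, key=score)

s, actions = "s0", ["left", "right"]
first = select_action(s, actions)      # equal counts: ties broken by order
N[(s, "left")] += 10                   # pretend "left" was visited often
preferred = select_action(s, actions)  # the bonus now favours "right"
```

This bonus only looks one step ahead; the paper's contribution is to replace it with a separately learned function estimating the long-term exploration value of an action, so novelty many steps away can still attract the agent.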
  • Algorithms, Vol. 15, Pages 82: Non-Invasive Systems and Methods Patents
           Review Based on Electrocardiogram for Diagnosis of Cardiovascular Diseases

    • Authors: Nellyzeth Flores, Marco A. Reyna, Roberto L. Avitia, Jose Antonio Cardenas-Haro, Conrado Garcia-Gonzalez
      First page: 82
      Abstract: Cardiovascular disease (CVD) is a global public health problem. It is a disease of multifactorial origin, and accurately diagnosing its incidence is a problem that health personnel face every day, which is why having all the indispensable tools to achieve optimal results is of utmost importance. Time is an essential factor when identifying heart problems, and specialists continually look for and develop options to improve this aspect, which requires a thorough analysis of the patient, with electrocardiograms being the de facto standard for the diagnosis and monitoring of patients. In this paper, we review patents and combined systems for the analysis of existing electrocardiogram signals specific to cardiovascular diseases. All these methods and equipment have the purpose of giving an accurate diagnosis and a prediction of the presence of CVD in patients with positive risk factors, and they are considered a first diagnostic option, based on the guidelines already established in the field of preventive cardiology. The methodology consists of searching for specific electrocardiography and cardiovascular disease subjects across various patent databases. A total of 2634 patents were obtained from the consulted databases. Of that total, only 30 patents that met all the previous criteria were considered, and a second in-depth review of their information was conducted. It is expected that studying and reviewing these patents will allow us to know the variety of tools available for the different pathologies that make up CVD, not only for immediate diagnosis (since, as mentioned, the time factor is decisive for the best prognosis) but also for following up on all the cases that arise, making it possible to provide a better quality of life to patients with CVD or even lead them to a full recovery.
      Citation: Algorithms
      PubDate: 2022-02-28
      DOI: 10.3390/a15030082
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 83: Detection of Insulators on Power
           Transmission Line Based on an Improved Faster Region-Convolutional Neural Network

    • Authors: Haijian Hu, Yicen Liu, Haina Rong
      First page: 83
      Abstract: Detecting insulators on a power transmission line is of great importance for the safe operation of power systems. To address the missed detections and misjudgments that the original VGG16 feature extraction network of a faster region-convolutional neural network (R-CNN) produces when facing insulators of different sizes, and to improve the accuracy of insulator detection on power transmission lines, an improved faster R-CNN algorithm is proposed. The improved algorithm replaces the original backbone feature extraction network VGG16 in faster R-CNN with the deeper and structurally more complex ResNet50 network, adding an efficient channel attention module based on the channel attention mechanism. Experimental results show that the feature extraction performance has been effectively improved through the improvement of the backbone network. The network model is trained on a training set consisting of 6174 insulator pictures and tested on a testing set consisting of 686 pictures. Compared with the traditional faster R-CNN, the mean average precision of the improved faster R-CNN increases to 89.37%, an improvement of 1.63%.
      Citation: Algorithms
      PubDate: 2022-03-01
      DOI: 10.3390/a15030083
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 84: Eye Fatigue Detection through Machine
           Learning Based on Single Channel Electrooculography

    • Authors: Yuqi Wang, Lijun Zhang, Zhen Fang
      First page: 84
      Abstract: Nowadays, eye fatigue is becoming more common globally. However, there has been no objective and effective method for eye fatigue detection apart from sample survey questionnaires. An eye fatigue detection method using machine learning, based on a single-channel electrooculography system, is proposed. Subjects are required to complete industry-standard questionnaires on eye fatigue, and the results are used as data labels. Then, we collect their electrooculography signals through a single-channel device. From the electrooculography signals, the five feature values most relevant to eye fatigue are extracted. A machine learning model that uses these five feature values as its input is designed for eye fatigue detection. Experimental results show that there is an objective link between electrooculography and eye fatigue. This method could be used in daily eye fatigue detection and shows promise for the future.
      Citation: Algorithms
      PubDate: 2022-03-03
      DOI: 10.3390/a15030084
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 85: Dynamic Layout Design Optimization to
           Improve Patient Flow in Outpatient Clinics Using Genetic Algorithms

    • Authors: Jyoti R. Munavalli, Shyam Vasudeva Rao, Aravind Srinivasan, Frits Van Merode
      First page: 85
      Abstract: Evolutionary algorithms, such as genetic algorithms, have been used in various optimization problems. In this paper, we propose applying such an algorithm to obtain layout designs/redesigns that improve the patient flow in an outpatient clinic. Layout designs are planned for long-term requirements, whereas the layout keeps being modified to meet short-term demands, so over time the layout often does not remain efficient. Therefore, there is a need for a model that supports decision making on layout redesigns and also optimizes workflow by incorporating the flow constraints. In this study, we propose to minimize waiting times by obtaining optimal and sub-optimal layout designs. A genetic algorithm is implemented to redesign the layouts based on the changing dynamics of patient demand, clinical pathways and services offered. The workflow is simulated with the current layout and the optimized layouts, and the results in terms of waiting time and cycle time are compared. The study shows that layout design or redesign that incorporates the workflow and pathways, along with the associated constraints, improves the waiting time and cycle time of patients in the outpatient clinic. The distance between departments/locations is translated into travel time, and the overall travel distance/time is minimized by rearranging the allocation of departments to locations through the genetic algorithm.
      Citation: Algorithms
      PubDate: 2022-03-06
      DOI: 10.3390/a15030085
      Issue No: Vol. 15, No. 3 (2022)
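The core loop, a genetic algorithm rearranging department-to-location assignments to minimise flow-weighted travel distance, can be sketched on a made-up five-department corridor. The flows, distances and GA settings below are all illustrative, not the clinic data or the authors' exact operators:

```python
import random

# Toy GA for assigning 5 departments to 5 corridor locations so that
# patient flow travels the shortest total distance (a small QAP-style
# stand-in for the clinic layout problem).

flow = [[0, 8, 2, 0, 1], [8, 0, 5, 1, 0], [2, 5, 0, 6, 1],
        [0, 1, 6, 0, 7], [1, 0, 1, 7, 0]]          # patients between depts
loc = [0.0, 1.0, 2.0, 3.0, 4.0]                    # location coordinates

def cost(perm):
    # Total flow-weighted travel distance; perm[i] is dept i's location.
    return sum(flow[i][j] * abs(loc[perm[i]] - loc[perm[j]])
               for i in range(5) for j in range(5))

def mutate(perm):
    a, b = random.sample(range(5), 2)              # swap two assignments
    child = perm[:]
    child[a], child[b] = child[b], child[a]
    return child

def ga(generations=200, pop_size=12):
    pop = [random.sample(range(5), 5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        pop = pop[: pop_size // 2]                 # elitist selection
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return min(pop, key=cost)

random.seed(0)
best = ga()
```

Because the elite half of the population always survives, the best layout found is monotonically non-worsening across generations, which mirrors how the paper's GA can be rerun whenever demand or pathways change.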
  • Algorithms, Vol. 15, Pages 86: Prediction of Intrinsically Disordered
           Proteins Using Machine Learning Based on Low Complexity Methods

    • Authors: Xingming Zeng, Haiyuan Liu, Hao He
      First page: 86
      Abstract: Prediction of intrinsically disordered proteins is a hot area in the field of bioinformatics. Due to the high cost of evaluating the disordered regions of protein sequences using experimental methods, we used a low-complexity prediction scheme. Sequence complexity is used in this scheme to calculate five features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the permutation entropy and the weighted average values of two propensities. Notably, this is the first time that permutation entropy has been applied to protein sequence analysis. In addition, in the data preprocessing stage, an appropriately sized sliding window and a comprehensive oversampling scheme are used to improve the prediction performance of our scheme, and two ensemble learning algorithms are used to compare the prediction results before and after adding the permutation entropy feature. The results show that adding permutation entropy improves the performance of the prediction algorithm, raising the MCC value from 0.465 to 0.526 in our scheme and proving its universality. Finally, we compare the simulation results of our scheme with those of some existing schemes to demonstrate its effectiveness.
      Citation: Algorithms
      PubDate: 2022-03-08
      DOI: 10.3390/a15030086
      Issue No: Vol. 15, No. 3 (2022)
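Permutation entropy, the feature the abstract highlights, can be computed in a few lines: count the ordinal pattern (ranking) of each sliding window and take the normalised Shannon entropy of the pattern distribution. The sketch below works on a generic numeric series; how the paper maps protein residues to numbers is not shown here:

```python
import math
from collections import Counter

def permutation_entropy(series, m=3):
    # Count the ordinal pattern of every length-m window, then return the
    # Shannon entropy of the pattern distribution, normalised to [0, 1].
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: series[i + k]))
        for i in range(len(series) - m + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    return h / math.log2(math.factorial(m))

low  = permutation_entropy([1, 2, 3, 4, 5, 6, 7, 8])   # monotone: 1 pattern
high = permutation_entropy([4, 7, 9, 10, 6, 11, 3, 5, 8])
```

A perfectly ordered series uses a single ordinal pattern and scores 0, while irregular series spread probability over many patterns and score closer to 1, which is what makes the measure a useful per-residue complexity feature.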
  • Algorithms, Vol. 15, Pages 87: A Seed-Guided Latent Dirichlet Allocation
           Approach to Predict the Personality of Online Users Using the PEN Model

    • Authors: Saravanan Sagadevan, Nurul Hashimah Ahamed Hassain Malim, Mohd Heikal Husin
      First page: 87
      Abstract: There is a growing interest in topic modeling to decipher the valuable information embedded in natural texts. However, there are no studies that train an unsupervised model to automatically categorize social network (SN) messages according to personality traits. Most of the existing literature has relied on the Big 5 framework and psychological reports to recognize the personality of users. Furthermore, collecting datasets for other personality themes is an inherent problem that requires unprecedented time and human effort, and it is bounded by privacy constraints. Alternatively, this study hypothesized that a small set of seed words is enough to decipher the psycholinguistic states encoded in texts, and that this auxiliary knowledge can help an unsupervised model categorize the messages according to human traits. Therefore, this study devised a dataless model called Seed-guided Latent Dirichlet Allocation (SLDA) to categorize SN messages according to the PEN model, which comprises the Psychoticism, Extraversion, and Neuroticism traits. Intrinsic evaluations were conducted to determine the performance of SLDA and disclose the nature of the texts it generates, especially in the context of Psychoticism. Extrinsic evaluations were conducted using several machine learning classifiers to assess how well the topic model identifies latent semantic structure that persists over time in the training documents. The findings show that SLDA outperformed other models by attaining a coherence score of up to 0.78, whereas the machine learning classifiers achieved precision of up to 0.993. We will also share the corpus generated by SLDA for further empirical studies.
      Citation: Algorithms
      PubDate: 2022-03-08
      DOI: 10.3390/a15030087
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 88: Research on Agricultural Machinery Rental
           Optimization Based on the Dynamic Artificial Bee-Ant Colony Algorithm

    • Authors: Jialin Hou, Jingtao Zhang, Wanying Wu, Tianguo Jin, Kai Zhou
      First page: 88
      Abstract: Agricultural machinery rental is a new service form that uses big data in agriculture to improve the utilization rate of agricultural machinery and promote the development of the agricultural economy. To realize agricultural machinery scheduling optimization in cloud services, a dynamic artificial bee-ant colony algorithm (DABAA) is proposed. First, to improve the practicability of the mathematical model in agricultural production, a dynamic coefficient is proposed. The mutation operation is then combined with the artificial bee colony (ABC) algorithm to improve it. Next, iterative threshold adjustment and optimal fusion point evaluation are used to combine the ABC algorithm with the ant colony optimization (ACO) algorithm, which improves not only the search precision but also the running speed. Finally, two groups of comparison experiments are carried out, and the results show that the DABAA can obviously improve the running speed and accuracy of cloud services in agricultural machinery rental.
      Citation: Algorithms
      PubDate: 2022-03-08
      DOI: 10.3390/a15030088
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 89: A Contrastive Learning Method for the
           Visual Representation of 3D Point Clouds

    • Authors: Feng Zhu, Jieyu Zhao, Zhengyi Cai
      First page: 89
      Abstract: At present, the unsupervised visual representation learning of point cloud models is mainly based on generative methods, but generative methods pay too much attention to the details of each point, ignoring the learning of semantic information. Therefore, this paper proposes a discriminative method for the contrastive learning of three-dimensional point cloud visual representations, which can effectively learn the visual representation of point cloud models. A self-attention point cloud capsule network is designed as the backbone network, which can effectively extract the features of point cloud data. By compressing the digit capsule layer, the class dependence of features is eliminated, and the generalization ability of the model and the ability of the feature queue to store features are improved. To exploit the equivariance of the capsule network, a Jaccard loss function is constructed, which helps the network distinguish the characteristics of positive and negative samples, thereby improving the performance of the contrastive learning. The model is pre-trained on the ShapeNetCore data set, and the pre-trained model is used for classification and segmentation tasks. The classification accuracy on the ModelNet40 data set is 0.1% higher than that of the best unsupervised method, PointCapsNet, and when only 10% of the labeled data is used, the classification accuracy exceeds 80%. The mIoU of part segmentation on the ShapeNet data set is 1.2% higher than that of the best comparison method, MulUnsupervised. The experimental results of classification and segmentation show that the proposed method achieves good accuracy. The alignment and uniformity of its features are better than those of the generative method PointCapsNet, which proves that this method learns the visual representation of three-dimensional point cloud models more effectively.
      Citation: Algorithms
      PubDate: 2022-03-08
      DOI: 10.3390/a15030089
      Issue No: Vol. 15, No. 3 (2022)
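The abstract does not spell out its Jaccard loss; one common differentiable form on feature vectors is the soft Jaccard (Tanimoto) loss below, shown only to illustrate the idea that similar embeddings are pulled toward loss 0 and dissimilar ones toward 1. The paper's exact construction for capsule outputs may differ, and the vectors here are invented:

```python
def soft_jaccard_loss(x, y):
    # Soft Jaccard (Tanimoto) loss: 1 - <x, y> / (|x|^2 + |y|^2 - <x, y>).
    # Identical vectors give 0; near-orthogonal pairs approach 1.
    dot = sum(a * b for a, b in zip(x, y))
    nx = sum(a * a for a in x)
    ny = sum(b * b for b in y)
    return 1.0 - dot / (nx + ny - dot)

anchor   = [0.8, 0.1, 0.1]
positive = [0.7, 0.2, 0.1]   # augmented view of the same shape
negative = [0.1, 0.1, 0.9]   # feature of a different shape
```

In a contrastive setup, the loss for the positive pair is driven down while the negative pair's loss stays high, which is exactly the separation behaviour the abstract credits to the Jaccard formulation.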
  • Algorithms, Vol. 15, Pages 90: Kleene Algebra to Compute Invariant Sets of
           Dynamical Systems

    • Authors: Thomas Le Mézo, Luc Jaulin, Damien Massé, Benoit Zerr
      First page: 90
      Abstract: In this paper, we show that a basic fixed point method used to enclose the greatest fixed point in a Kleene algebra allows us to compute inner and outer approximations of invariant-based sets for continuous-time nonlinear dynamical systems. Our contribution is to provide the definitions and theorems that make the link between the theory of invariant sets and Kleene algebra. This link has not been made before, and it allows us to rigorously compute sets that can be defined as combinations of positive invariant sets. Some illustrative examples show the nice properties of the approach.
      Citation: Algorithms
      PubDate: 2022-03-08
      DOI: 10.3390/a15030090
      Issue No: Vol. 15, No. 3 (2022)
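The greatest-fixed-point enclosure the abstract describes can be sketched in miniature. The toy below (illustrative only; the paper works with interval enclosures of continuous-time systems, not finite state sets) iterates a monotone map downward from the top element of a finite lattice, which for a discrete dynamics yields the largest invariant subset of a constraint set:

```python
# Sketch: enclose the greatest fixed point of a monotone map f by
# downward Kleene-style iteration from the top element, here on the
# lattice of subsets of a finite state space.

def greatest_fixed_point(f, top):
    """Iterate X -> f(X) starting from `top` until stabilization.
    On a finite lattice with monotone f this reaches the greatest
    fixed point in finitely many steps."""
    x = frozenset(top)
    while True:
        nxt = frozenset(f(x))
        if nxt == x:
            return x
        x = nxt

# Toy dynamics on states 0..5; `safe` is the constraint set.
step = {0: 1, 1: 2, 2: 2, 3: 4, 4: 5, 5: 0}
safe = {1, 2, 3, 4}
f = lambda x: {s for s in safe if step[s] in x}

# Largest invariant subset of `safe`: states whose whole forward
# orbit stays inside, here {1, 2}.
print(greatest_fixed_point(f, safe))
```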
  • Algorithms, Vol. 15, Pages 91: Analysis of Explainable Goal-Driven
           Reinforcement Learning in a Continuous Simulated Environment

    • Authors: Ernesto Portugal, Francisco Cruz, Angel Ayala, Bruno Fernandes
      First page: 91
      Abstract: Currently, artificial intelligence is in an important period of growth. Thanks to the technology boom, it is now possible to solve problems that previously could not be resolved. For example, through goal-driven learning, intelligent machines or agents may be able to perform tasks without human intervention; however, this also raises the problem of understanding the agent’s decision making, and explainable goal-driven learning attempts to close this gap. This work focuses on the adaptability of two explainability methods, one based on learning and one on introspection, in continuous environments; both estimate a probability of success to explain the agent’s behavior, and both had previously been tested only in discrete environments. The continuous environment used in this study is the car-racing problem, a simulated car racing game that forms part of the Python OpenAI Gym library. The agents in this environment were trained with the Deep Q-Network algorithm, and the explainability methods were implemented in parallel. This research proposes how to adapt and implement both methods for continuous states; adapting the learning method required major changes, implemented through an artificial neural network. The probabilities obtained from the two methods were consistent throughout the experiments, with the learning method yielding the higher probability. In terms of computational resources, the introspection method was slightly better than its counterpart.
      Citation: Algorithms
      PubDate: 2022-03-09
      DOI: 10.3390/a15030091
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 92: Mean Estimation on the Diagonal of Product
           Manifolds

    • Authors: Mathias Højgaard Jensen, Stefan Sommer
      First page: 92
      Abstract: Computing sample means on Riemannian manifolds is typically computationally costly, as exemplified by computation of the Fréchet mean, which often requires finding minimizing geodesics to each data point for each step of an iterative optimization scheme. When closed-form expressions for geodesics are not available, this leads to a nested optimization problem that is costly to solve. The implied computational cost impacts applications in both geometric statistics and in geometric deep learning. The weighted diffusion mean offers an alternative to the weighted Fréchet mean. We show how the diffusion mean and the weighted diffusion mean can be estimated with a stochastic simulation scheme that does not require nested optimization. We achieve this by conditioning a Brownian motion in a product manifold to hit the diagonal at a predetermined time. We develop the theoretical foundation for the sampling-based mean estimation, we develop two simulation schemes, and we demonstrate the applicability of the method with examples of sampled means on two manifolds.
      Citation: Algorithms
      PubDate: 2022-03-10
      DOI: 10.3390/a15030092
      Issue No: Vol. 15, No. 3 (2022)
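The sampling idea behind the diffusion mean can be seen in a Euclidean toy (a sketch under strong simplifying assumptions, not the authors’ manifold scheme): in R^d, for independent Brownian motions started at each data point, the point where the product process hits the diagonal at time t is Gaussian around the arithmetic mean with variance t/n per axis, so averaging sampled hitting points recovers the mean without any nested optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_diagonal_hits(points, t=0.1, n_samples=500):
    """Euclidean toy of the product-manifold construction: sample
    plausible diagonal hitting points at time t. Each sample is
    Gaussian around the arithmetic mean with variance t/n per axis
    (the closed form available only in the flat case)."""
    x = np.asarray(points, dtype=float)
    n, d = x.shape
    centre = x.mean(axis=0)
    return centre + np.sqrt(t / n) * rng.standard_normal((n_samples, d))

pts = [[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]]
est = sample_diagonal_hits(pts).mean(axis=0)  # close to the mean [1.0, 1.0]
```

On a genuine manifold no such closed form exists, which is why the paper simulates the conditioned (bridge) process instead.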
  • Algorithms, Vol. 15, Pages 93: Ensemble Machine Learning Model to Predict
           the Waterborne Syndrome

    • Authors: Mohammed Gollapalli
      First page: 93
      Abstract: The COVID-19 epidemic has highlighted the significance of sanitization and of maintaining hygienic access to clean water to reduce mortality and morbidity worldwide. Diarrhea is one of the prevalent waterborne diseases caused by contaminated water in many low-income countries with similar living conditions. According to the latest statistics from the World Health Organization (WHO), diarrhea is among the top five primary causes of death worldwide in low-income nations, affecting people in every age group due to the lack of clean water for daily living. In this study, a stacking ensemble machine learning model was employed against traditional models to extract clinical knowledge for a better understanding of patients’ characteristics; disease prevalence; hygienic conditions; quality of water used for cooking, bathing, and toiletries; chemicals used; therapist’s medications; and symptoms reflected in the field study data. Results revealed that the ensemble model achieves a higher accuracy of 98.90% across the training and testing phases than the frequently used J48, Naïve Bayes, SVM, NN, PART, Random Forest, and Logistic Regression models. Applying the outcomes of this research at an early stage could help people in low-income countries achieve a better lifestyle, suffer fewer infections, and avoid expensive hospital visits.
      Citation: Algorithms
      PubDate: 2022-03-11
      DOI: 10.3390/a15030093
      Issue No: Vol. 15, No. 3 (2022)
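The study’s field data and exact model configuration are not reproduced here; a minimal sketch of the same stacking idea with scikit-learn (assuming it is available, and substituting synthetic data for the survey data) combines two of the base learners named in the abstract under a logistic-regression meta-learner:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the waterborne-disease survey data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-learner is trained on out-of-fold predictions of the base models, which is what lets stacking outperform any single constituent.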
  • Algorithms, Vol. 15, Pages 94: Mechanical Fault Prognosis through Spectral
           Analysis of Vibration Signals

    • Authors: Kang Wang, Zhi-Jiang Xu, Yi Gong, Ke-Lin Du
      First page: 94
      Abstract: Vibration signal analysis is the most common technique used for mechanical vibration monitoring. By using vibration sensors, the fault prognosis of rotating machinery provides a way to detect possible machine damage at an early stage and prevent property losses by taking appropriate measures. We first propose a digital integrator in the frequency domain by combining the fast Fourier transform with digital filtering. The velocity and displacement signals are obtained from an acceleration signal by means of two such digital integrators. We then propose a fast method for calculating the envelope spectra and instantaneous frequency by using the spectral properties of the signals. The cepstrum is also introduced in order to detect periodic components that are unidentifiable in the power spectrum. Further, a fault prognosis algorithm is presented that exploits these spectral analyses. Finally, we design and implement a visualized real-time vibration analyzer on a Raspberry Pi embedded system, with our fault prognosis algorithm at its core. The real-time acceleration, velocity, and displacement signals of the vibration, as well as their corresponding spectra and statistics, are visualized. The developed fault prognosis system has been successfully deployed in a water company.
      Citation: Algorithms
      PubDate: 2022-03-15
      DOI: 10.3390/a15030094
      Issue No: Vol. 15, No. 3 (2022)
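The core of a frequency-domain digital integrator of the kind the abstract describes is division of each Fourier coefficient by jω, with the DC bin zeroed since the integration constant is lost. A minimal NumPy sketch (illustrative; the paper additionally applies digital filtering):

```python
import numpy as np

def integrate_fft(signal, fs):
    """Frequency-domain integration: divide each Fourier coefficient
    by j*2*pi*f and zero the DC bin (the mean/integration constant
    cannot be recovered)."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    with np.errstate(divide="ignore", invalid="ignore"):
        spec = np.where(freqs > 0, spec / (2j * np.pi * freqs), 0.0)
    return np.fft.irfft(spec, n)

# a(t) = cos(2*pi*5*t) integrates to v(t) = sin(2*pi*5*t) / (2*pi*5)
fs = 1000.0
t = np.arange(1000) / fs
acc = np.cos(2 * np.pi * 5 * t)
vel = integrate_fft(acc, fs)
```

Applying the integrator twice would yield displacement from acceleration, which is how the two cascaded integrators in the paper are arranged.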
  • Algorithms, Vol. 15, Pages 95: Prediction of Harvest Time of Apple Trees:
           An RNN-Based Approach

    • Authors: Tiago Boechel, Lucas Micol Policarpo, Gabriel de Oliveira Ramos, Rodrigo da Rosa Righi, Dhananjay Singh
      First page: 95
      Abstract: In the field of agricultural research, Machine Learning (ML) has been used to increase agricultural productivity and minimize its environmental impact, proving to be an essential technique to support decision making. Accurate harvest time prediction is a challenge for fruit production in a sustainable manner, which could eventually reduce food waste. Linear models have been used to estimate period duration; however, they present variability when used to estimate the chronological time of apple tree stages. This study proposes the PredHarv model, which is a machine learning model that uses Recurrent Neural Networks (RNN) to predict the start date of the apple harvest, given the weather conditions related to the temperature expected for the period. Predictions are made from the phenological phase of the beginning of flowering, using a multivariate approach, based on the time series of phenology and meteorological data. The computational model contributes to anticipating information about the harvest date, enabling the grower to better plan activities, avoiding costs, and consequently improving productivity. We developed a prototype of the model and performed experiments with real datasets from agricultural institutions. We evaluated the metrics, and the results obtained in evaluation scenarios demonstrate that the model is efficient, has good generalizability, and is capable of improving the accuracy of the prediction results.
      Citation: Algorithms
      PubDate: 2022-03-18
      DOI: 10.3390/a15030095
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 96: A Dynamic Distributed Deterministic
           Load-Balancer for Decentralized Hierarchical Infrastructures

    • Authors: Spyros Sioutas, Efrosini Sourla, Kostas Tsichlas, Gerasimos Vonitsanos, Christos Zaroliagis
      First page: 96
      Abstract: In this work, we propose D3-Tree, a dynamic distributed deterministic structure for data management in decentralized networks, obtained by engineering and extending an existing decentralized structure. In an extensive experimental study, we verify that the implemented structure outperforms other well-known hierarchical tree-based structures, since it provides better complexities for load-balancing operations. More specifically, the structure achieves an O(log N) amortized bound (N is the number of nodes present in the network), using an efficient deterministic load-balancing mechanism that is general enough to be applied to other hierarchical tree-based structures. Moreover, our structure achieves O(log N) worst-case search performance. Last but not least, we investigate the structure’s fault tolerance, which has not been sufficiently tackled in previous work, both theoretically and through rigorous experimentation. We prove that D3-Tree is highly fault-tolerant and achieves O(log N) amortized search cost under massive node failures, with a significant success rate. Afterwards, by incorporating this novel balancing scheme into the ART (Autonomous Range Tree) structure, we go one step further to achieve sub-logarithmic complexity and propose the ART+ structure. ART+ achieves an O(log_b² log N) communication cost for query and update operations (b is a double-exponential power of 2 and N is the total number of nodes). Moreover, ART+ is a fully dynamic and fault-tolerant structure that supports join/leave node operations in an O(log log N) number of hops expected WHP (with high probability) and performs load balancing at O(log log N) amortized cost.
      Citation: Algorithms
      PubDate: 2022-03-18
      DOI: 10.3390/a15030096
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 97: Editorial for the Special Issue on
           “Machine Learning in Healthcare and Biomedical Application”

    • Authors: Alessia Sarica
      First page: 97
      Abstract: In the last decade, Machine Learning (ML) has indisputably had a pervasive application in healthcare and biomedical applications [...]
      Citation: Algorithms
      PubDate: 2022-03-19
      DOI: 10.3390/a15030097
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 98: Evolutionary Optimization of Spiking Neural
           P Systems for Remaining Useful Life Prediction

    • Authors: Leonardo Lucio Custode, Hyunho Mo, Andrea Ferigo, Giovanni Iacca
      First page: 98
      Abstract: Remaining useful life (RUL) prediction is a key enabler for predictive maintenance. In fact, the possibility of accurately and reliably predicting the RUL of a system, based on a record of its monitoring data, can allow users to schedule maintenance interventions before faults occur. In the recent literature, several data-driven methods for RUL prediction have been proposed. However, most of them are based on traditional (connectionist) neural networks, such as convolutional neural networks, and alternative mechanisms have barely been explored. Here, we tackle the RUL prediction problem for the first time by using a membrane computing paradigm, namely that of Spiking Neural P (in short, SN P) systems. First, we show how SN P systems can be adapted to handle the RUL prediction problem. Then, we propose the use of a neuro-evolutionary algorithm to optimize the structure and parameters of the SN P systems. Our results on two datasets, namely the CMAPSS and New CMAPSS benchmarks from NASA, are fairly comparable with those obtained by much more complex deep networks, showing a reasonable compromise between performance and number of trainable parameters, which in turn correlates with memory consumption and computing time.
      Citation: Algorithms
      PubDate: 2022-03-19
      DOI: 10.3390/a15030098
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 99: Design of Selective Laser Melting (SLM)
           Structures: Consideration of Different Material Properties in Multiple
           Surface Layers Resulting from the Manufacturing in a Topology Optimization

    • Authors: Jan Holoch, Sven Lenhardt, Sven Revfi, Albert Albers
      First page: 99
      Abstract: Topology optimization offers a way to derive load-compliant structures. These structures tend to be complex, and conventional manufacturing offers only limited possibilities for their production. Additive manufacturing provides a remedy due to its high design freedom. However, this type of manufacturing can cause areas of different material properties in the final part. In selective laser melting, for example, three areas of different porosity can occur depending on the process parameters, the geometry of the part, and the print direction, resulting in a direct interrelation between manufacturing and design. In order to address this interrelation during design finding, this contribution presents an optimization method in which the three porous areas are identified and the associated material properties are considered iteratively in a topology optimization. For this purpose, the topology optimization is interrupted in each iteration; the three areas and their material properties are then determined and transferred back to the topology optimization, where they are used in the calculation of the next iteration. Using this optimization method, a design with higher volume-specific stiffness than that of a standard topology optimization can be created. In future work, it will serve as the basis for an extension with a global strength constraint that maintains the maximum permissible stress and the minimum wall thickness.
      Citation: Algorithms
      PubDate: 2022-03-19
      DOI: 10.3390/a15030099
      Issue No: Vol. 15, No. 3 (2022)
  • Algorithms, Vol. 15, Pages 100: Key Concepts, Weakness and Benchmark on
           Hash Table Data Structures

    • Authors: Santiago Tapia-Fernández, Daniel García-García, Pablo García-Hernandez
      First page: 100
      Abstract: Most computer programs or applications need fast data structures. The performance of a data structure is necessarily influenced by the complexity of its common operations; thus, any data structure that exhibits a theoretical complexity of amortized constant time in several of its main operations should draw a lot of attention. Such is the case of the family of data structures called hash tables. However, what is the real efficiency of these hash tables? That is an interesting question with no simple answer, and there are some issues to be considered. Of course, there is not a unique hash table; in fact, there are several sub-groups of hash tables, and, moreover, not all programming languages use the same variety of hash table in their default implementation, nor do they share the same interface. Nevertheless, all hash tables do have a common issue: they have to resolve hash collisions. That is a potential weakness, and it also induces a classification of hash tables according to the strategy used to resolve collisions. In this paper, some key concepts about hash tables are presented, and the definitions around those key concepts are reviewed and clarified, especially in order to study the characteristics of the main strategies for implementing hash tables and how they deal with hash collisions. Then, some benchmark cases are designed and presented to assess the performance of hash tables. The cases have been designed to be randomized, to be self-tested, to be representative of real user cases, and to expose and analyze the impact of different factors on performance across different hash tables and programming languages. All cases have been programmed in C++, Java, and Python and analyzed in terms of interfaces and efficiency (time and memory). The benchmark yields important results about the performance of these structures and its (lack of) relationship with complexity analysis.
      Citation: Algorithms
      PubDate: 2022-03-21
      DOI: 10.3390/a15030100
      Issue No: Vol. 15, No. 3 (2022)
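A randomized, self-testing micro-benchmark in the spirit of the paper’s cases can be sketched in a few lines for Python’s built-in `dict` (an open-addressing hash table). This is only a sketch of the benchmarking pattern, not the paper’s benchmark suite:

```python
import random
import time

def bench_dict(n):
    """Time n random-key insertions and then n successful lookups in a
    dict; returns both timings plus a self-test hit count."""
    keys = random.sample(range(10 * n), n)   # distinct random keys
    t0 = time.perf_counter()
    table = {}
    for k in keys:
        table[k] = k
    t_insert = time.perf_counter() - t0
    t0 = time.perf_counter()
    hits = sum(1 for k in keys if k in table)
    t_lookup = time.perf_counter() - t0
    return t_insert, t_lookup, hits

for n in (10_000, 100_000):
    ti, tl, hits = bench_dict(n)
    print(f"n={n:>7}: insert {ti * 1e3:.1f} ms, lookup {tl * 1e3:.1f} ms")
```

If the per-operation cost is truly amortized O(1), the reported times should grow roughly linearly with n; deviations expose cache and resizing effects of the kind the paper analyzes.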
  • Algorithms, Vol. 15, Pages 101: Multi-Fidelity Sparse Polynomial Chaos and
           Kriging Surrogate Models Applied to Analytical Benchmark Problems

    • Authors: Markus P. Rumpfkeil, Dean Bryson, Phil Beran
      First page: 101
      Abstract: In this article, multi-fidelity kriging and sparse polynomial chaos expansion (SPCE) surrogate models are constructed. In addition, a novel combination of the two surrogate approaches into a multi-fidelity SPCE-Kriging model is presented. Accurate surrogate models, once obtained, can be employed for evaluating a large number of designs for uncertainty quantification, optimization, or design space exploration. Analytical benchmark problems are used to show that accurate multi-fidelity surrogate models can be obtained at lower computational cost than high-fidelity models. The benchmarks include non-polynomial and polynomial functions of various input dimensions, lower-dimensional heterogeneous non-polynomial functions, as well as a coupled spring-mass system. Overall, multi-fidelity models are more accurate than high-fidelity ones for the same cost, especially when only a few high-fidelity training points are employed. Full-order PCEs tend to be a factor of two or so worse than SPCEs in terms of overall accuracy. The combination of the two approaches into the SPCE-Kriging model leads to a more accurate and flexible method overall.
      Citation: Algorithms
      PubDate: 2022-03-21
      DOI: 10.3390/a15030101
      Issue No: Vol. 15, No. 3 (2022)
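The basic multi-fidelity pattern of fitting a cheap model on many low-fidelity samples and correcting it with a few high-fidelity samples can be sketched on the standard Forrester benchmark pair. This additive-correction toy is illustrative only and is not the paper’s SPCE-Kriging construction:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Standard Forrester multi-fidelity test functions.
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # expensive model
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5      # cheap model

x_lo = np.linspace(0.0, 1.0, 50)           # plentiful low-fidelity samples
x_hi = np.array([0.0, 0.4, 0.6, 1.0])      # scarce high-fidelity samples

# Fit the bulk of the trend on cheap data, then a small polynomial
# correction on the residuals at the few expensive points.
lo_fit = Polynomial.fit(x_lo, f_lo(x_lo), deg=10)
corr = Polynomial.fit(x_hi, f_hi(x_hi) - lo_fit(x_hi), deg=3)
surrogate = lambda x: lo_fit(x) + corr(x)
```

By construction the corrected surrogate interpolates the high-fidelity data while inheriting the low-fidelity trend elsewhere, which is the same division of labor the multi-fidelity kriging and SPCE models exploit.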
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762
