Subjects -> MATHEMATICS (Total: 1118 journals)
    - APPLIED MATHEMATICS (92 journals)
    - GEOMETRY AND TOPOLOGY (23 journals)
    - MATHEMATICS (819 journals)
    - MATHEMATICS (GENERAL) (45 journals)
    - NUMERICAL ANALYSIS (26 journals)

MATHEMATICS (819 journals)

Showing 1 - 200 of 538 Journals sorted alphabetically
Abakós     Open Access   (Followers: 5)
Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg     Hybrid Journal   (Followers: 3)
Accounting Perspectives     Full-text available via subscription   (Followers: 9)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 17)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 5)
ACM Transactions on Mathematical Software (TOMS)     Hybrid Journal   (Followers: 9)
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 44)
Acta Applicandae Mathematicae     Hybrid Journal   (Followers: 2)
Acta Mathematica     Hybrid Journal   (Followers: 11)
Acta Mathematica Hungarica     Hybrid Journal   (Followers: 2)
Acta Mathematica Scientia     Full-text available via subscription   (Followers: 5)
Acta Mathematica Sinica, English Series     Hybrid Journal   (Followers: 6)
Acta Mathematica Vietnamica     Hybrid Journal  
Acta Mathematicae Applicatae Sinica, English Series     Hybrid Journal  
Advanced Science Letters     Full-text available via subscription   (Followers: 13)
Advances in Applied Clifford Algebras     Hybrid Journal   (Followers: 6)
Advances in Catalysis     Full-text available via subscription   (Followers: 8)
Advances in Complex Systems     Hybrid Journal   (Followers: 12)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 23)
Advances in Decision Sciences     Open Access   (Followers: 4)
Advances in Difference Equations     Open Access   (Followers: 5)
Advances in Fixed Point Theory     Open Access   (Followers: 9)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 22)
Advances in Linear Algebra & Matrix Theory     Open Access   (Followers: 10)
Advances in Materials Science     Open Access   (Followers: 22)
Advances in Mathematical Physics     Open Access   (Followers: 10)
Advances in Mathematics     Full-text available via subscription   (Followers: 22)
Advances in Numerical Analysis     Open Access   (Followers: 8)
Advances in Operations Research     Open Access   (Followers: 14)
Advances in Operator Theory     Hybrid Journal   (Followers: 4)
Advances in Porous Media     Full-text available via subscription   (Followers: 6)
Advances in Pure Mathematics     Open Access   (Followers: 11)
Advances in Science and Research (ASR)     Open Access   (Followers: 8)
Aequationes Mathematicae     Hybrid Journal   (Followers: 2)
African Journal of Educational Studies in Mathematics and Sciences     Full-text available via subscription   (Followers: 12)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 7)
Afrika Matematika     Hybrid Journal   (Followers: 3)
Air, Soil & Water Research     Open Access   (Followers: 13)
AKSIOMA Journal of Mathematics Education     Open Access   (Followers: 4)
AKSIOMATIK : Jurnal Penelitian Pendidikan dan Pembelajaran Matematika     Open Access   (Followers: 1)
Al-Jabar : Jurnal Pendidikan Matematika     Open Access   (Followers: 1)
Al-Qadisiyah Journal for Computer Science and Mathematics     Open Access   (Followers: 1)
AL-Rafidain Journal of Computer Sciences and Mathematics     Open Access   (Followers: 6)
Algebra and Logic     Hybrid Journal   (Followers: 8)
Algebra Colloquium     Hybrid Journal   (Followers: 4)
Algebra Universalis     Hybrid Journal   (Followers: 2)
Algorithmic Operations Research     Open Access   (Followers: 5)
Algorithms     Open Access   (Followers: 14)
Algorithms Research     Open Access   (Followers: 2)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 10)
American Journal of Mathematical Analysis     Open Access   (Followers: 2)
American Journal of Mathematical and Management Sciences     Hybrid Journal   (Followers: 1)
American Journal of Mathematics     Full-text available via subscription   (Followers: 9)
American Journal of Operations Research     Open Access   (Followers: 8)
American Mathematical Monthly     Full-text available via subscription   (Followers: 7)
An International Journal of Optimization and Control: Theories & Applications     Open Access   (Followers: 13)
Analele Universitatii Ovidius Constanta - Seria Matematica     Open Access  
Analysis and Applications     Hybrid Journal   (Followers: 2)
Analysis and Mathematical Physics     Hybrid Journal   (Followers: 10)
Analysis Mathematica     Full-text available via subscription  
Anargya : Jurnal Ilmiah Pendidikan Matematika     Open Access   (Followers: 8)
Annales Mathematicae Silesianae     Open Access   (Followers: 2)
Annales mathématiques du Québec     Hybrid Journal   (Followers: 4)
Annales Universitatis Mariae Curie-Sklodowska, sectio A – Mathematica     Open Access   (Followers: 1)
Annales Universitatis Paedagogicae Cracoviensis. Studia Mathematica     Open Access  
Annali di Matematica Pura ed Applicata     Hybrid Journal   (Followers: 1)
Annals of Combinatorics     Hybrid Journal   (Followers: 3)
Annals of Data Science     Hybrid Journal   (Followers: 17)
Annals of Discrete Mathematics     Full-text available via subscription   (Followers: 8)
Annals of Functional Analysis     Hybrid Journal   (Followers: 4)
Annals of Mathematics     Full-text available via subscription   (Followers: 4)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 16)
Annals of PDE     Hybrid Journal  
Annals of Pure and Applied Logic     Open Access   (Followers: 6)
Annals of the Alexandru Ioan Cuza University - Mathematics     Open Access  
Annals of the Institute of Statistical Mathematics     Hybrid Journal   (Followers: 1)
Annals of West University of Timisoara - Mathematics     Open Access   (Followers: 1)
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access   (Followers: 2)
Annuaire du Collège de France     Open Access   (Followers: 6)
ANZIAM Journal     Open Access   (Followers: 2)
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 3)
Applications of Mathematics     Hybrid Journal   (Followers: 3)
Applied Categorical Structures     Hybrid Journal   (Followers: 4)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 16)
Applied Mathematics     Open Access   (Followers: 10)
Applied Mathematics     Open Access   (Followers: 6)
Applied Mathematics & Optimization     Hybrid Journal   (Followers: 13)
Applied Mathematics - A Journal of Chinese Universities     Hybrid Journal   (Followers: 2)
Applied Mathematics and Nonlinear Sciences     Open Access   (Followers: 1)
Applied Mathematics Letters     Full-text available via subscription   (Followers: 3)
Applied Mathematics Research eXpress     Hybrid Journal   (Followers: 2)
Applied Network Science     Open Access   (Followers: 3)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 6)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 6)
Arab Journal of Mathematical Sciences     Open Access   (Followers: 4)
Arabian Journal of Mathematics     Open Access   (Followers: 2)
Archive for Mathematical Logic     Hybrid Journal   (Followers: 4)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 6)
Archive of Numerical Software     Open Access  
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 6)
Arkiv för Matematik     Hybrid Journal   (Followers: 1)
Armenian Journal of Mathematics     Open Access   (Followers: 1)
Arnold Mathematical Journal     Hybrid Journal   (Followers: 1)
Artificial Satellites     Open Access   (Followers: 24)
Asia-Pacific Journal of Operational Research     Hybrid Journal   (Followers: 3)
Asian Journal of Algebra     Open Access   (Followers: 1)
Asian Research Journal of Mathematics     Open Access  
Asian-European Journal of Mathematics     Hybrid Journal   (Followers: 4)
Australian Mathematics Teacher, The     Full-text available via subscription   (Followers: 7)
Australian Primary Mathematics Classroom     Full-text available via subscription   (Followers: 7)
Australian Senior Mathematics Journal     Full-text available via subscription   (Followers: 2)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Axioms     Open Access   (Followers: 1)
Baltic International Yearbook of Cognition, Logic and Communication     Open Access   (Followers: 2)
Banach Journal of Mathematical Analysis     Hybrid Journal   (Followers: 1)
Basin Research     Hybrid Journal   (Followers: 6)
BIBECHANA     Open Access   (Followers: 2)
Biomath     Open Access  
BIT Numerical Mathematics     Hybrid Journal   (Followers: 1)
Boletim Cearense de Educação e História da Matemática     Open Access  
Boletim de Educação Matemática     Open Access  
Boletín de la Sociedad Matemática Mexicana     Hybrid Journal  
Bollettino dell'Unione Matematica Italiana     Full-text available via subscription   (Followers: 3)
British Journal for the History of Mathematics     Hybrid Journal  
British Journal of Mathematical and Statistical Psychology     Full-text available via subscription   (Followers: 20)
Bruno Pini Mathematical Analysis Seminar     Open Access  
Buletinul Academiei de Stiinte a Republicii Moldova. Matematica     Open Access   (Followers: 14)
Bulletin des Sciences Mathématiques     Full-text available via subscription   (Followers: 4)
Bulletin of Dnipropetrovsk University. Series : Communications in Mathematical Modeling and Differential Equations Theory     Open Access   (Followers: 3)
Bulletin of Mathematical Sciences     Open Access   (Followers: 1)
Bulletin of Symbolic Logic     Full-text available via subscription   (Followers: 3)
Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics     Open Access  
Bulletin of the Australian Mathematical Society     Full-text available via subscription   (Followers: 2)
Bulletin of the Brazilian Mathematical Society, New Series     Hybrid Journal  
Bulletin of the Iranian Mathematical Society     Hybrid Journal  
Bulletin of the London Mathematical Society     Hybrid Journal   (Followers: 3)
Bulletin of the Malaysian Mathematical Sciences Society     Hybrid Journal  
Cadernos do IME : Série Matemática     Open Access   (Followers: 2)
Calculus of Variations and Partial Differential Equations     Hybrid Journal  
Canadian Journal of Mathematics / Journal canadien de mathématiques     Hybrid Journal  
Canadian Journal of Science, Mathematics and Technology Education     Hybrid Journal   (Followers: 23)
Canadian Mathematical Bulletin     Hybrid Journal  
Carpathian Mathematical Publications     Open Access   (Followers: 1)
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 6)
CHANCE     Hybrid Journal   (Followers: 5)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chaos, Solitons & Fractals : X     Open Access   (Followers: 1)
ChemSusChem     Hybrid Journal   (Followers: 8)
Chinese Annals of Mathematics, Series B     Hybrid Journal  
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 3)
Chinese Journal of Mathematics     Open Access  
Ciencia     Open Access   (Followers: 1)
CODEE Journal     Open Access   (Followers: 2)
Cogent Mathematics     Open Access   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 3)
Collectanea Mathematica     Hybrid Journal  
College Mathematics Journal     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 17)
Commentarii Mathematici Helvetici     Hybrid Journal  
Communications in Advanced Mathematical Sciences     Open Access  
Communications in Combinatorics and Optimization     Open Access  
Communications in Contemporary Mathematics     Hybrid Journal  
Communications in Mathematical Physics     Hybrid Journal   (Followers: 4)
Communications On Pure & Applied Mathematics     Hybrid Journal   (Followers: 5)
Complex Analysis and its Synergies     Open Access   (Followers: 3)
Complex Variables and Elliptic Equations: An International Journal     Hybrid Journal  
Composite Materials Series     Full-text available via subscription   (Followers: 11)
Compositio Mathematica     Full-text available via subscription  
Comptes Rendus : Mathematique     Open Access  
Computational and Applied Mathematics     Hybrid Journal   (Followers: 4)
Computational and Mathematical Methods     Hybrid Journal  
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 3)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 1)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 9)
Computational Mechanics     Hybrid Journal   (Followers: 10)
Computational Methods and Function Theory     Hybrid Journal  
Computational Optimization and Applications     Hybrid Journal   (Followers: 11)
Computers & Mathematics with Applications     Full-text available via subscription   (Followers: 11)
Confluentes Mathematici     Hybrid Journal  
Constructive Mathematical Analysis     Open Access   (Followers: 1)
Contributions to Discrete Mathematics     Open Access   (Followers: 1)
Contributions to Game Theory and Management     Open Access  
COSMOS     Hybrid Journal   (Followers: 1)
Cryptography and Communications     Hybrid Journal   (Followers: 14)
Cuadernos de Investigación y Formación en Educación Matemática     Open Access  
Cubo. A Mathematical Journal     Open Access  
Current Research in Biostatistics     Open Access   (Followers: 8)
Czechoslovak Mathematical Journal     Hybrid Journal   (Followers: 1)
Daya Matematis : Jurnal Inovasi Pendidikan Matematika     Open Access   (Followers: 1)
Demographic Research     Open Access   (Followers: 16)
Design Journal : An International Journal for All Aspects of Design     Hybrid Journal   (Followers: 35)
Desimal : Jurnal Matematika     Open Access   (Followers: 3)
Developments in Clay Science     Full-text available via subscription   (Followers: 1)
Developments in Mineral Processing     Full-text available via subscription   (Followers: 3)
Dhaka University Journal of Science     Open Access  
Differential Equations and Dynamical Systems     Hybrid Journal   (Followers: 4)


Algorithms
Journal Prestige (SJR): 0.217
Citation Impact (CiteScore): 1
Number of Followers: 14

This is an Open Access journal.
ISSN (Print): 1999-4893
Published by MDPI  [238 journals]
  • Algorithms, Vol. 14, Pages 253: The Power of Human–Algorithm
           Collaboration in Solving Combinatorial Optimization Problems

    • Authors: Tapani Toivonen, Markku Tukiainen
      First page: 253
      Abstract: Many combinatorial optimization problems are often considered intractable to solve exactly or by approximation. An example of such a problem is maximum clique, which—under standard assumptions in complexity theory—cannot be solved in sub-exponential time or be approximated within a polynomial factor efficiently. However, we show that if a polynomial time algorithm can query informative Gaussian priors from an expert poly(n) times, then a class of combinatorial optimization problems can be solved efficiently up to a multiplicative factor ϵ, where ϵ is an arbitrary constant. In this paper, we present proofs of our claims and show numerical results to support them. Our methods can cast new light on how to approach optimization problems in domains where even approximation of the problem is not feasible. Furthermore, the results can help researchers to understand the structures of these problems (or whether these problems have any structure at all!). While the proposed methods can be used to approximate combinatorial problems in NPO, we note that the scope of the problems solvable might well include problems that are provably intractable (problems in EXPTIME).
      Citation: Algorithms
      PubDate: 2021-08-24
      DOI: 10.3390/a14090253
      Issue No: Vol. 14, No. 9 (2021)
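The abstract above centers on the maximum clique problem. As a point of reference only (this is not the authors' expert-query method), a brute-force exact solver illustrates why the problem is considered intractable: the search is exponential in the number of vertices. The graph below is invented for the example.

```python
from itertools import combinations

def max_clique(n, edges):
    """Exact maximum clique by exhaustive search: exponential time,
    feasible only for tiny graphs -- which is the point of the abstract."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Try candidate sets from largest to smallest; the first hit is maximum.
    for k in range(n, 0, -1):
        for cand in combinations(range(n), k):
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                return list(cand)
    return []

# A 5-vertex graph whose unique largest clique is {0, 1, 2}
clique = max_clique(5, [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)])
```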
  • Algorithms, Vol. 14, Pages 254: Prioritizing Construction Labor
           Productivity Improvement Strategies Using Fuzzy Multi-Criteria Decision
           Making and Fuzzy Cognitive Maps

    • Authors: Kazerooni, Nguyen, Fayek
      First page: 254
      Abstract: Construction labor productivity (CLP) is affected by various interconnected factors, such as crew motivation and working conditions. Improved CLP can benefit a construction project in many ways, such as a shortened project life cycle and lower project costs. However, budget, time, and resource restrictions force companies to select and implement only a limited number of CLP improvement strategies. Therefore, a research gap exists regarding methods for supporting the selection of CLP improvement strategies for a given project by quantifying the impact of strategies on CLP with respect to interrelationships among CLP factors. This paper proposes a decision support model that integrates fuzzy multi-criteria decision making with fuzzy cognitive maps to prioritize CLP improvement strategies based on their impact on CLP, causal relationships among CLP factors, and project characteristics. The proposed model was applied to determine CLP improvement strategies for concrete-pouring activities in building projects as an illustrative example. This study contributes to the body of knowledge by providing a systematic approach for selecting appropriate CLP improvement strategies based on interrelationships among the factors affecting CLP and the impact of such strategies on CLP. The results are expected to support construction practitioners in identifying effective improvement strategies to enhance CLP on their projects.
      Citation: Algorithms
      PubDate: 2021-08-24
      DOI: 10.3390/a14090254
      Issue No: Vol. 14, No. 9 (2021)
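The decision model above combines fuzzy MCDM with fuzzy cognitive maps. A minimal sketch of the cognitive-map half follows, using a common sigmoid update rule and an invented three-concept map (motivation, supervision, productivity); the weights are illustrative, not taken from the paper.

```python
import numpy as np

def fcm_infer(W, state, n_iter=100):
    """Iterate a fuzzy cognitive map with the common rule
    A(t+1) = sigmoid(A(t) + W @ A(t)); activations settle in (0, 1)."""
    for _ in range(n_iter):
        state = 1.0 / (1.0 + np.exp(-(state + W @ state)))
    return state

# W[i, j] = causal weight of concept j on concept i (made-up numbers):
# 0 = crew motivation, 1 = supervision quality, 2 = labor productivity
W = np.array([[0.0, 0.0, 0.0],
              [0.6, 0.0, 0.0],
              [0.4, 0.7, 0.0]])
final = fcm_infer(W, np.array([1.0, 0.5, 0.5]))
```

With all causal links positive, every concept settles above the 0.5 neutral level, which is how such maps rank the downstream impact of a strategy.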
  • Algorithms, Vol. 14, Pages 255: An Algebraic Approach to Identifiability

    • Authors: Daniel Gerbet, Klaus Röbenack
      First page: 255
      Abstract: This paper addresses the problem of identifiability of nonlinear polynomial state-space systems. Such systems have already been studied via the input-output equations, a description that, in general, requires differential algebra. The authors use a different algebraic approach, which is based on distinguishability and observability. Employing techniques from algebraic geometry such as polynomial ideals and Gröbner bases, local as well as global results are derived. The methods are illustrated on some example systems.
      Citation: Algorithms
      PubDate: 2021-08-27
      DOI: 10.3390/a14090255
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 256: Summarisation, Simulation and Comparison
           of Nine Control Algorithms for an Active Control Mount with an Oscillating
           Coil Actuator

    • Authors: Fan, Wang, Han, Wei, Liu, Yuan
      First page: 256
      Abstract: With the further development of the automotive industry, traditional vibration isolation methods struggle to meet wide-frequency-band requirements under multiple operating conditions, so the active control mount (ACM) has attracted growing attention, and its control algorithm plays a decisive role. In this paper, an ACM with an oscillating coil actuator (OCA) is taken as the object, and a comparative study of control algorithms is performed to select the optimal one for the ACM. Through modelling of the ACM, controller design, and system simulations, the force transmission rate is used to compare the vibration isolation performance of nine control algorithms: least mean square (LMS) adaptive feedforward control, recursive least squares (RLS) adaptive feedforward control, filtered reference signal LMS (FxLMS) adaptive control, linear quadratic regulator (LQR) optimal control, H2 control, H∞ control, proportional integral derivative (PID) feedback control, fuzzy control, and fuzzy PID control. In summary, the FxLMS adaptive control algorithm has the best performance and the advantage of easier hardware implementation, and it can be applied in ACMs.
      Citation: Algorithms
      PubDate: 2021-08-27
      DOI: 10.3390/a14090256
      Issue No: Vol. 14, No. 9 (2021)
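Of the nine algorithms compared, FxLMS is singled out as the practical winner. A toy single-channel FxLMS loop may clarify its structure: the reference signal is filtered through a model of the secondary path before driving the weight update. All signals, path coefficients, and the step size below are invented for illustration, and the secondary-path model is assumed perfect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative signals and paths -- all coefficients are invented.
n = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.1 * rng.standard_normal(n)
P = np.array([0.8, 0.3, -0.1])       # primary path: reference -> error sensor
S = np.array([0.9, 0.2])             # secondary path: actuator -> error sensor
S_hat = S.copy()                     # assume a perfect secondary-path model

d = np.convolve(x, P)[:n]            # disturbance at the error sensor
xf = np.convolve(x, S_hat)[:n]       # filtered reference (the "x" in FxLMS)

L, mu = 8, 0.02                      # control filter length, step size
w = np.zeros(L)
y = np.zeros(n)
e = np.zeros(n)
for k in range(L, n):
    xk = x[k - L + 1:k + 1][::-1]            # recent reference samples
    y[k] = w @ xk                            # control filter output
    ys = S @ y[k - 1:k + 1][::-1]            # control output through secondary path
    e[k] = d[k] - ys                         # residual vibration
    w += mu * e[k] * xf[k - L + 1:k + 1][::-1]   # FxLMS weight update

# Residual power in the last 500 samples relative to the raw disturbance
reduction = np.mean(e[-500:] ** 2) / np.mean(d[-500:] ** 2)
```

After convergence the residual power is a small fraction of the disturbance power, which is exactly the force-transmission improvement the paper measures.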
  • Algorithms, Vol. 14, Pages 257: Metal Surface Defect Detection Using
           Modified YOLO

    • Authors: Yiming Xu, Kai Zhang, Li Wang
      First page: 257
      Abstract: Aiming at the problems of inefficient detection caused by traditional manual inspection and unclear features in metal surface defect detection, an improved metal surface defect detection technology based on the You Only Look Once (YOLO) model is presented. The shallow features of the 11th layer in Darknet-53 are combined with the deep features of the neural network to generate a new scale feature layer on the basis of the YOLOv3 network structure. Its goal is to extract more features of small defects. Then, K-Means++ is used to reduce the sensitivity to the initial cluster centers when analyzing the size information of the anchor boxes. The optimal anchor boxes are selected to make the positioning more accurate. The performance of the modified metal surface defect detection technology is compared with other detection methods on the Tianchi dataset. The results show that the average detection accuracy of the modified YOLO model is 75.1%, which is higher than that of YOLOv3. It also has a clear detection speed advantage compared with the faster region-based convolutional neural network (Faster R-CNN) and other detection algorithms. The improved YOLO model provides highly accurate location information for small defect targets and has strong real-time performance.
      Citation: Algorithms
      PubDate: 2021-08-28
      DOI: 10.3390/a14090257
      Issue No: Vol. 14, No. 9 (2021)
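K-Means++ seeding, used above to pick anchor boxes, replaces purely random initial centers with distance-weighted sampling: each new center is drawn with probability proportional to its squared distance from the nearest center already chosen. A compact sketch of the generic algorithm on invented box-size data (this is not the paper's training pipeline):

```python
import numpy as np

def kmeanspp_init(points, k, rng):
    """K-Means++ seeding: each new center is sampled with probability
    proportional to its squared distance from the nearest chosen center."""
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers],
                    axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(1)
# Two tight groups of invented (width, height) box sizes; k = 2 seeding
# should land one center in each group
boxes = np.vstack([rng.normal([10, 10], 0.5, (20, 2)),
                   rng.normal([50, 80], 0.5, (20, 2))])
centers = kmeanspp_init(boxes, 2, rng)
```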
  • Algorithms, Vol. 14, Pages 258: A Novel Semi-Supervised Fuzzy C-Means
           Clustering Algorithm Using Multiple Fuzzification Coefficients

    • Authors: Tran Dinh Khang, Manh-Kien Tran, Michael Fowler
      First page: 258
      Abstract: Clustering is an unsupervised machine learning method with many practical applications that has gathered extensive research interest. It is a technique of dividing data elements into clusters such that elements in the same cluster are similar. Because clustering is unsupervised, there is no information about the labels of the elements. However, when some knowledge about the data points is available in advance, it is beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is a common one. To make the FCM algorithm a semi-supervised method, it was proposed in the literature to use an auxiliary matrix to adjust the membership grades of the elements, forcing them into certain clusters during the computation. In this study, instead of using the auxiliary matrix, we propose using multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrate the convergence of the algorithm and validate the efficiency of the method through a numerical example.
      Citation: Algorithms
      PubDate: 2021-08-29
      DOI: 10.3390/a14090258
      Issue No: Vol. 14, No. 9 (2021)
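For orientation, here is the standard unsupervised fuzzy C-means baseline that sSMC-FCM extends; the paper's semi-supervised multiple-fuzzification-coefficient component is not reproduced. Data, the fuzzifier m, and the deterministic seeding are invented for the demo.

```python
import numpy as np

def fcm(X, c_init, m=2.0, n_iter=100):
    """Standard fuzzy C-means: alternate membership and center updates.
    m > 1 is the fuzzification coefficient the paper generalizes per element."""
    centers = c_init.astype(float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))      # membership proportional to d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)     # each row sums to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),   # blob near (0, 0)
               rng.normal(5, 0.3, (30, 2))])  # blob near (5, 5)
U, centers = fcm(X, X[[0, -1]])               # seed one center in each blob
```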
  • Algorithms, Vol. 14, Pages 259: Solving the Two Echelon Vehicle Routing
           Problem Using Simulated Annealing Algorithm Considering Drop Box
           Facilities and Emission Cost: A Case Study of Reverse Logistics
           Application in Indonesia

    • Authors: Marco Reinaldi, Anak Agung Ngurah Perwira Redi, Dio Fawwaz Prakoso, Arrie Wicaksono Widodo, Mochammad Rizal Wibisono, Agus Supranartha, Rahmad Inca Liperda, Reny Nadlifatin, Yogi Tri Prasetyo, Sekar Sakti
      First page: 259
      Abstract: A two-echelon distribution system is often used to solve logistics problems. This study considers a two-echelon distribution system in a reverse logistics context, with drop box facilities used as intermediary facilities. An optimization model of integer linear programming is proposed, representing a two-echelon vehicle routing problem with a drop box facility (2EVRP-DF). The aim is to find the minimum total cost, consisting of vehicle transportation costs and the costs of compensating customers who have to travel to access these intermediary facilities. The results are then compared to those of common practice in reverse logistics, in which customers are assumed to go directly to the depot to drop off their goods. In addition, this study analyzes the environmental impact by adding a component for the carbon emissions emitted by the vehicles. A set of comprehensive computational experiments is conducted. The results indicate that the 2EVRP-DF model can provide optimal costs and lower carbon emissions than the common practice.
      Citation: Algorithms
      PubDate: 2021-08-30
      DOI: 10.3390/a14090259
      Issue No: Vol. 14, No. 9 (2021)
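Simulated annealing itself is a generic metaheuristic; the sketch below applies it to a toy symmetric routing instance (six points on a hexagon), not to the paper's 2EVRP-DF model with drop boxes and emission costs. The temperature schedule and the swap move are arbitrary textbook choices.

```python
import math
import random

def simulated_annealing(dist, T0=10.0, cooling=0.995, n_iter=20000, seed=4):
    """Generic SA over tours: swap two stops, accept a worse move with
    probability exp(-delta/T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    cost = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    cur = cost(tour)
    best, best_cost = tour[:], cur
    T = T0
    for _ in range(n_iter):
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]
        new = cost(tour)
        if new < cur or rng.random() < math.exp((cur - new) / T):
            cur = new
            if new < best_cost:
                best, best_cost = tour[:], new
        else:
            tour[i], tour[j] = tour[j], tour[i]   # rejected: undo the swap
        T *= cooling
    return best, best_cost

# Six customers on a unit hexagon: the optimal tour has length 6
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
       for k in range(6)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, tour_cost = simulated_annealing(dist)
```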
  • Algorithms, Vol. 14, Pages 260: Comparison of Profit-Based Multi-Objective
           Approaches for Feature Selection in Credit Scoring

    • Authors: Naomi Simumba, Suguru Okami, Akira Kodaka, Naohiko Kohtake
      First page: 260
      Abstract: Feature selection is crucial to the credit-scoring process, allowing for the removal of irrelevant variables with low predictive power. Conventional credit-scoring techniques treat this as a separate process wherein features are selected based on improving a single statistical measure, such as accuracy; however, recent research has focused on meaningful business parameters such as profit. More than one factor may be important to the selection process, making multi-objective optimization methods a necessity. However, the comparative performance of multi-objective methods is known to vary depending on the test problem and the specific implementation. This research employed a recent hybrid non-dominated sorting binary grasshopper optimization algorithm (NSBGOA) and compared its performance on multi-objective feature selection for credit scoring to that of two popular benchmark algorithms in this space. A further comparison was made to determine the impact of changing the profit-maximizing base classifier on algorithm performance. Experiments demonstrate that, of the base classifiers used, the neural network classifier improved the profit-based measure and minimized the mean number of features in the population the most. Additionally, the NSBGOA algorithm gave relatively smaller hypervolumes and longer computational times across all base classifiers, while giving the highest mean objective values for the solutions. It is clear that the base classifier has a significant impact on the results of multi-objective optimization; therefore, careful consideration should be given to the choice of base classifier in such scenarios.
      Citation: Algorithms
      PubDate: 2021-08-30
      DOI: 10.3390/a14090260
      Issue No: Vol. 14, No. 9 (2021)
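The comparison above revolves around sets of non-dominated solutions. A minimal Pareto-front filter for two minimization objectives, using an invented loss-versus-feature-count toy (this illustrates the concept, not the NSBGOA algorithm itself):

```python
def pareto_front(points):
    """Non-dominated subset when every objective is minimized: a point is
    kept unless some other point is at least as good in all objectives."""
    return [p for p in points
            if not any(q != p and all(q[i] <= p[i] for i in range(len(p)))
                       for q in points)]

# Invented (expected loss, number of features) pairs for candidate subsets
solutions = [(0.10, 8), (0.12, 5), (0.20, 3), (0.15, 6), (0.25, 3)]
front = pareto_front(solutions)
```

Here (0.15, 6) is dominated by (0.12, 5) and (0.25, 3) by (0.20, 3), so the front keeps the remaining three trade-offs.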
  • Algorithms, Vol. 14, Pages 261: PFSegIris: Precise and Fast Segmentation
           Algorithm for Multi-Source Heterogeneous Iris

    • Authors: Dong, Liu, Zhu
      First page: 261
      Abstract: Current segmentation methods have limitations for multi-source heterogeneous iris segmentation, since differences in acquisition devices and environmental conditions lead to images of greatly varying quality across iris datasets. Thus, different segmentation algorithms are generally applied to distinct datasets. Meanwhile, deep-learning-based iris segmentation models occupy more storage space and take a long time to run. Therefore, we propose PFSegIris, a lightweight, precise, and fast segmentation network model aimed at multi-source heterogeneous iris images. First, the designed iris feature extraction modules are used to fully extract heterogeneous iris feature information, reducing the number of parameters, the computation, and the loss of information. Then, an efficient parallel attention mechanism is introduced only once between the encoder and the decoder to capture semantic information, suppress noise interference, and enhance the discriminability of iris region pixels. Finally, we add a skip connection from low-level features to capture more detailed information. Experiments on four near-infrared datasets and three visible-light datasets show that the segmentation precision is better than that of existing algorithms, while the number of parameters and the storage space are only 1.86 M and 0.007 GB, respectively. The average prediction time is less than 0.10 s. The proposed algorithm segments multi-source heterogeneous iris images more precisely and more quickly than other algorithms.
      Citation: Algorithms
      PubDate: 2021-08-30
      DOI: 10.3390/a14090261
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 262: Parallel Hybrid Particle Swarm Algorithm
           for Workshop Scheduling Based on Spark

    • Authors: Tianhua Zheng, Jiabin Wang, Yuxiang Cai
      First page: 262
      Abstract: Hybrid mixed-flow workshop scheduling involves problems such as mass production, mass manufacturing, mass assembly, and mass synthesis of products. To solve these problems, a parallelized hybrid particle swarm algorithm built on the Spark platform is proposed. Compared with existing intelligent algorithms, the parallel hybrid particle swarm algorithm is more likely to reach the global optimal solution. In a loader manufacturing workshop, where the optimization goal is to minimize the maximum completion time, the parallelized hybrid particle swarm algorithm is applied. The results show that for relatively large batches, the parallel hybrid particle swarm algorithm can effectively obtain a scheduling plan and avoid falling into local optima. Compared with the serial version, parallelization improves algorithm efficiency by 2–4 times, and the larger the batches, the more pronounced the efficiency gain.
      Citation: Algorithms
      PubDate: 2021-08-30
      DOI: 10.3390/a14090262
      Issue No: Vol. 14, No. 9 (2021)
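A serial global-best PSO sketch on a toy continuous objective may help place the parallelization: in a Spark variant like the paper's, the per-particle fitness evaluations (the `np.apply_along_axis` step below) are what get distributed across workers. The parameters and the sphere objective are illustrative, not the scheduling fitness from the paper.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, n_iter=200, seed=5):
    """Plain global-best PSO. In a Spark-parallel variant, evaluating the
    fitness of all particles is the map step distributed to workers."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)  # the parallelizable step
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g, float(pbest_val.min())

# Toy objective: sphere function, minimum 0 at the origin
best, best_val = pso(lambda p: float(np.sum(p ** 2)))
```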
  • Algorithms, Vol. 14, Pages 263: Sequential Recommendation through Graph
           Neural Networks and Transformer Encoder with Degree Encoding

    • Authors: Shuli Wang, Xuewen Li, Xiaomeng Kou, Jin Zhang, Shaojie Zheng, Jinlong Wang, Gong
      First page: 263
      Abstract: Predicting users’ next behavior by learning their preferences from their historical behaviors is known as sequential recommendation. In this task, it is crucial to learn sequence representations by modeling the pairwise relationships between items in the sequence to capture their long-range dependencies. In this paper, we propose a novel deep neural network named the graph convolutional network transformer recommender (GCNTRec). GCNTRec learns effective item representations from a user's historical behavior sequence by extracting, in an end-to-end fashion, the correlations between a target node and multi-layer neighbor nodes on graphs constructed under heterogeneous information networks through a graph convolutional network (GCN) with degree encoding, while capturing long-range dependencies of items in the sequence through a transformer encoder. Using this multi-dimensional vector representation, items related to a user's historical behavior sequence can be easily predicted. We empirically evaluated GCNTRec on multiple public datasets. The experimental results show that our approach effectively predicts subsequent relevant items and outperforms previous techniques.
      Citation: Algorithms
      PubDate: 2021-08-31
      DOI: 10.3390/a14090263
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 264: Fully Automatic Operation Algorithm of
           Urban Rail Train Based on RBFNN Position Output Constrained Robust
           Adaptive Control

    • Authors: Yang, Zhang, Jin
      First page: 264
      Abstract: High parking accuracy, comfort and stability, and fast response speed are important indicators of the control performance of a fully automatic operation system. In this paper, aiming at the low accuracy of fully automatic operation control of urban rail trains, a radial basis function neural network (RBFNN) position output-constrained robust adaptive control algorithm based on train operation curve tracking is proposed. Firstly, the nonlinear dynamic model of train motion is established on the basis of motion mechanics. Then, the RBFNN is used to adaptively approximate and compensate for the additional resistance and unknown disturbances of the train model, and a basic-resistance parameter adaptation mechanism is introduced to enhance the anti-interference ability and adaptability of the control system. Lastly, using the RBFNN position output-constrained robust adaptive control technique, the train tracks the desired operation curve, thereby achieving smooth operation between stations and accurate stopping. The simulation results show that the algorithm has good robustness and adaptability: under parameter uncertainty and external disturbance, the control system maintains high-precision control and improves ride comfort.
      Citation: Algorithms
      PubDate: 2021-09-09
      DOI: 10.3390/a14090264
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 265: QB4MobOLAP: A Vocabulary Extension for
           Mobility OLAP on the Semantic Web

    • Authors: Wisnubhadra, Baharin, Emran, Setyohadi
      First page: 265
      Abstract: The availability of devices that track the positions of moving objects has attracted many researchers to Mobility Online Analytical Processing (Mobility OLAP). Mobility OLAP makes use of trajectory data warehousing techniques, which typically store the paths of moving objects over time. Semantic Web (SW) users have published a large number of moving object datasets that include spatial and non-spatial data. These data are available as open data and require advanced analysis to aid decision making. However, current SW technologies support advanced analysis only for multidimensional data warehouses and Online Analytical Processing (OLAP) over static spatial and non-spatial SW data. The existing technology does not support the modeling of moving object facts, the creation of basic mobility analytical queries, or the definition of fundamental operators and functions for moving object types. This article introduces the QB4MobOLAP vocabulary, which enables the analysis of mobility data stored in RDF cubes, and defines Mobility OLAP operators and SPARQL user-defined functions. The QB4MobOLAP vocabulary and the Mobility OLAP operators are evaluated by applying them to a practical transportation analysis use case involving 8826 triples, of which approximately 7000 are fact triples, each containing nearly 1000 temporal data points (equivalent to 7 million records in conventional databases). The execution of six pertinent spatiotemporal analytics query samples results in a practical, simple model with expressive performance, enabling executive decisions on transportation analysis.
      Citation: Algorithms
      PubDate: 2021-09-13
      DOI: 10.3390/a14090265
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 266: Multi-Class Freeway Congestion and
           Emission Based on Robust Dynamic Multi-Objective Optimization

    • Authors: Chen, Feng, Guo
      First page: 266
      Abstract: In order to solve the problem of traffic congestion and emission optimization on urban multi-class expressways, a robust dynamic nondominated sorting multi-objective genetic algorithm, DFCM-RDNSGA-III, based on the density fuzzy c-means clustering method, is proposed in this paper. Considering the three performance indicators of travel time, ramp queue and traffic emissions, the ramp metering and variable speed limit control schemes of an expressway are optimized to relieve congestion on the main road and ramps, thereby achieving energy conservation and emission reduction. In the VISSIM simulation environment, a road network with multiple on-ramps and off-ramps is built to verify the performance of the algorithm. The results show that, compared with the existing NSGA-III algorithm, the proposed DFCM-RDNSGA-III provides better ramp metering and variable speed limit control schemes during the formation and dissipation of network peaks. In addition, expressway congestion is reduced, and energy conservation and emission reduction are realized.
      Citation: Algorithms
      PubDate: 2021-09-13
      DOI: 10.3390/a14090266
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 267: A New Constructive Heuristic Driven by
           Machine Learning for the Traveling Salesman Problem

    • Authors: Umberto Junior Mele, Luca Maria Gambardella, Roberto Montemanni
      First page: 267
      Abstract: Recent systems applying Machine Learning (ML) to solve the Traveling Salesman Problem (TSP) exhibit issues when they try to scale up to real-case scenarios with several hundred vertices. The use of Candidate Lists (CLs) has been brought up to cope with these issues. A CL is defined as the subset of edges incident to a given vertex that are believed likely to appear in the optimal tour. The initialization procedure that identifies a CL for each vertex in the TSP aids the solver by restricting the search space during solution creation. It also reduces the computational burden, which is highly desirable when solving large TSPs. So far, ML has been used to create CLs and to assign values to their elements, expressing the ML preferences at solution insertion. Although promising, these systems do not restrict what the ML learns and does to create solutions, which brings generalization issues. Therefore, motivated by exploratory and statistical studies of CL behavior across multiple TSP solutions, in this work we rethink the usage of ML by purposely employing it only on a task that avoids well-known ML weaknesses, such as training in the presence of frequent outliers and the detection of under-represented events. The task is to confirm inclusion in a solution only for edges that are most likely optimal. The CLs of the edge considered for inclusion are employed as input to the neural network, and the ML is in charge of distinguishing when such an edge is in the optimal solution from when it is not. The proposed approach enables reasonable generalization and strikes an efficient balance between ML and optimization techniques. Our ML-Constructive heuristic is trained on small instances and is then able to produce solutions, without losing quality, for large problems as well. We compare our method against classic constructive heuristics, showing that the new approach performs well on TSPLIB instances with up to 1748 cities. Although ML-Constructive incurs an expensive constant computation time due to training, we prove that the worst-case computational complexity of solution construction after training is O(n² log n²), n being the number of vertices in the TSP instance.
      Citation: Algorithms
      PubDate: 2021-09-14
      DOI: 10.3390/a14090267
      Issue No: Vol. 14, No. 9 (2021)
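The candidate-list idea in the abstract above can be illustrated with the standard k-nearest-neighbour construction (a common way to seed CLs; the paper's ML model then classifies edges within these lists, so the details below are illustrative rather than the authors' code):

```python
import math

def candidate_lists(coords, k=5):
    """Build a candidate list (CL) per vertex from its k nearest neighbours.

    A CL restricts the edges a constructive solver may consider; the paper's
    classifier then only has to judge, per CL edge, whether it is likely to
    belong to the optimal tour."""
    n = len(coords)

    def dist(i, j):
        return math.dist(coords[i], coords[j])

    cl = {}
    for i in range(n):
        others = sorted((j for j in range(n) if j != i), key=lambda j: dist(i, j))
        cl[i] = others[:k]          # k closest vertices, nearest first
    return cl

# Two clusters of points: each vertex's CL stays inside its cluster.
pts = [(0, 0), (1, 0), (2, 0), (10, 10), (11, 10)]
cl = candidate_lists(pts, k=2)
```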
  • Algorithms, Vol. 14, Pages 268: UFaceNet: Research on Multi-Task Face
           Recognition Algorithm Based on CNN

    • Authors: Huoyou Li, Jianshiun Hu, Jingwen Yu, Ning Yu, Qingqiang Wu
      First page: 268
      Abstract: With the application of deep convolutional neural networks, the performance of computer vision tasks has been raised to a new level. Constructing deeper and more complex networks allows face recognition algorithms to obtain higher accuracy. However, the large computation and storage costs of neural networks limit their further popularization. To solve this problem, we studied a unified and efficient neural-network face recognition algorithm for the single-camera setting. We propose that the complete face recognition process consists of four tasks: face detection, liveness detection, keypoint detection, and face verification. Combining the key algorithms of these four tasks, we propose a unified network model based on a depthwise-separable convolutional structure: UFaceNet. The model is trained jointly on multiple tasks with multisource data and uses the keypoint detection results to aid the learning of the other tasks. It further introduces an attention mechanism through feature-level cropping and alignment to ensure accuracy, and shares convolutional layers among tasks to reduce computation and accelerate the network. The multi-task learning objective implicitly increases the amount and diversity of the training data, making it easier to learn features that generalize. The experimental results show that UFaceNet outperforms other models in computation and parameter count with higher efficiency, and shows promise for practical applications.
      Citation: Algorithms
      PubDate: 2021-09-15
      DOI: 10.3390/a14090268
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 269: Algorithms for Bidding Strategies in Local
           Energy Markets: Exhaustive Search through Parallel Computing and
           Metaheuristic Optimization

    • Authors: Andrés Angulo, Diego Rodríguez, Wilmer Garzón, Diego F. Gómez, Ameena Al Sumaiti, Sergio Rivera
      First page: 269
      Abstract: The integration of diverse energy resources into traditional power systems presents new challenges for real-time implementation and operation. In the last decade, ways have been sought to optimize the operation of small microgrids (SMGs) that combine a variety of energy sources (PV (photovoltaic) prosumers, Genset CHP (combined heat and power), etc.) with uncertainty in energy production, which results in different market prices. For this reason, metaheuristic methods have been used to optimize the decision-making process of multiple players in local and external markets. The network considered here includes nine agents: three consumers, three prosumers (consumers with PV capabilities), and three CHP generators. This article deploys metaheuristic algorithms with the objective of maximizing power market transactions and the clearing price. Since metaheuristic optimization algorithms do not guarantee global optima, an exhaustive search is deployed to find globally optimal points. The exhaustive search algorithm is implemented on a parallel computing architecture to reach results in a short amount of time, and the global optimum is used as a benchmark to evaluate the performance of the different metaheuristic algorithms. The paper presents results, discussion, comparison, and recommendations regarding the proposed set of algorithms and performance tests.
      Citation: Algorithms
      PubDate: 2021-09-16
      DOI: 10.3390/a14090269
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 270: Use of the Codon Table to Quantify the
           Evolutionary Role of Random Mutations

    • Authors: Mihaly Mezei
      First page: 270
      Abstract: The various biases affecting RNA mutations during evolution are the subject of intense research, leaving the extent of the role of random mutations undefined. To remedy this lacuna, using the codon table, the number of codons representing each amino acid was correlated with the amino acid frequencies in different branches of the evolutionary tree. These correlations were seen to increase as evolution progressed. Furthermore, the number of RNA mutations that result in a given amino acid mutation was found to be correlated with several widely used amino acid similarity tables (used in sequence alignments). These correlations also increased when the observed codon usage was factored in.
      Citation: Algorithms
      PubDate: 2021-09-17
      DOI: 10.3390/a14090270
      Issue No: Vol. 14, No. 9 (2021)
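The central computation described above, correlating codon-table degeneracy with amino-acid frequencies, can be sketched in a few lines. The degeneracy counts below are the standard genetic code; which observed frequency vector to correlate against is left to the analysis:

```python
# Degeneracy of the standard genetic code: sense codons per amino acid
# (61 sense codons; the 3 stop codons are excluded).
CODONS_PER_AA = {
    "Leu": 6, "Ser": 6, "Arg": 6,
    "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
    "Ile": 3,
    "Phe": 2, "Tyr": 2, "Cys": 2, "His": 2, "Gln": 2,
    "Asn": 2, "Lys": 2, "Asp": 2, "Glu": 2,
    "Met": 1, "Trp": 1,
}

def expected_frequencies():
    """Amino-acid frequencies expected if mutations were purely random
    and every sense codon were equally likely."""
    total = sum(CODONS_PER_AA.values())
    return {aa: n / total for aa, n in CODONS_PER_AA.items()}

def pearson(xs, ys):
    """Plain Pearson correlation, the statistic used to compare such
    expected frequencies with observed ones along the evolutionary tree."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```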
  • Algorithms, Vol. 14, Pages 271: Long-Term EEG Component Analysis Method
           Based on Lasso Regression

    • Authors: Bo, Li, Wu, Li, Ma
      First page: 271
      Abstract: At present, there are very few analysis methods for long-term electroencephalogram (EEG) components, and temporal information is ignored by most existing techniques in cognitive studies. Therefore, a new analysis method based on time-varying characteristics is proposed. First, a regression model based on Lasso is proposed to reveal the relationship between acoustics and physiology. Then, permutation tests and Gaussian fitting are applied to find the highest correlation. A cognitive experiment based on 93 emotional sounds was designed, and the EEG data of 10 volunteers were collected to verify the model. The 48-dimensional acoustic features and 428 EEG components were extracted and analyzed together. Through this method, the relationship between EEG components and acoustic features can be measured. Moreover, according to the temporal relations, an optimal offset of the acoustic features was found, which yields better alignment with the EEG features. After the regression analysis, significant EEG components were found that are in good agreement with cognitive laws. This provides a new approach to long-term EEG component analysis that could be applied in other related subjects.
      Citation: Algorithms
      PubDate: 2021-09-17
      DOI: 10.3390/a14090271
      Issue No: Vol. 14, No. 9 (2021)
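The Lasso regression at the heart of the method can be sketched with plain cyclic coordinate descent (a minimal stand-in; the paper's actual model, features, and regularization settings differ). The L1 penalty drives the weights of irrelevant features to exactly zero, shown here on tiny synthetic data:

```python
def soft_threshold(rho, alpha):
    """Soft-thresholding operator, the closed-form coordinate update
    induced by the L1 penalty."""
    if rho < -alpha:
        return rho + alpha
    if rho > alpha:
        return rho - alpha
    return 0.0

def lasso_cd(X, y, alpha=0.5, iters=200):
    """Lasso via cyclic coordinate descent:
    minimize (1/2n) * ||y - Xw||^2 + alpha * ||w||_1 (no intercept)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            rho = 0.0   # partial residual correlation for feature j
            z = 0.0     # feature j's (uncentered) second moment
            for i in range(n):
                pred = sum(w[k] * X[i][k] for k in range(p))
                rho += X[i][j] * (y[i] - pred + w[j] * X[i][j])
                z += X[i][j] * X[i][j]
            w[j] = soft_threshold(rho / n, alpha) / (z / n)
    return w

# y depends only on the first feature; Lasso zeroes out the second.
X = [[1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, -1.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, alpha=0.5)
```

With this data the fit converges to w[0] = 29/15 (2 minus a small L1 shrinkage) and w[1] exactly 0.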
  • Algorithms, Vol. 14, Pages 272: How Neurons in Deep Models Relate with
           Neurons in the Brain

    • Authors: Arianna Pavone, Alessio Plebe
      First page: 272
      Abstract: In dealing with the algorithmic aspects of intelligent systems, the analogy with the biological brain has always been attractive, and has often had a dual function. On the one hand, it has been an effective source of inspiration for their design, while, on the other hand, it has been used as the justification for their success, especially in the case of Deep Learning (DL) models. However, in recent years, inspiration from the brain has lost its grip on its first role, yet it continues to be proposed in its second role, although we believe it is also becoming less and less defensible. Outside the chorus, there are theoretical proposals that instead identify important demarcation lines between DL and human cognition, to the point of being even incommensurable. In this article we argue that, paradoxically, the partial indifference of the developers of deep neural models to the functioning of biological neurons is one of the reasons for their success, having promoted a pragmatically opportunistic attitude. We believe that it is even possible to glimpse a biological analogy of a different kind, in that the essentially heuristic way of proceeding in modern DL development bears intriguing similarities to natural evolution.
      Citation: Algorithms
      PubDate: 2021-09-17
      DOI: 10.3390/a14090272
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 273: Intelligent Search of Values for a
           Controller Using the Artificial Bee Colony Algorithm to Control the
           Velocity of Displacement of a Robot

    • Authors: José M. Villegas, Camilo Caraveo, David A. Mejía, José L. Rodríguez, Yuridia Vega, Leticia Cervantes, Alejandro Medina-Santiago
      First page: 273
      Abstract: Optimization is essential in engineering, and in conjunction with meta-heuristics it has had a great impact in recent years, owing to its precision in the search for optimal parameters. In this work, the Artificial Bee Colony (ABC) algorithm is used to optimize the values of the variables of a proportional-integral (PI) controller, in order to observe the behavior of the controller with the optimized Ti and Kp values. The approach is evaluated on a robot built with the MINDSTORMS EV3 kit. The objective of this work is to demonstrate the improvement and efficiency of controllers combined with optimization meta-heuristics. In the results section, we observe that the results improve considerably compared with traditional methods. The main contribution of this work is the implementation of an optimization algorithm (ABC) applied to a PI controller, tested by controlling the movement of a robot. Many papers have used this kit in domains such as education, as well as in science and engineering research on real-world problems, with acceptable results.
      Citation: Algorithms
      PubDate: 2021-09-18
      DOI: 10.3390/a14090273
      Issue No: Vol. 14, No. 9 (2021)
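A minimal sketch of the approach, assuming a hypothetical first-order plant standing in for the robot's velocity loop (the real EV3 dynamics, bounds, and cost function are the authors'): an Artificial Bee Colony search tunes Kp and Ti to minimize the integral squared error of a simulated PI loop. For brevity, the onlooker phase here reuses uniform selection instead of the classic fitness-proportional selection:

```python
import random

def ise(params, setpoint=1.0, steps=200, dt=0.05):
    """Integral of squared error for a PI controller driving the
    hypothetical first-order plant dy/dt = -y + u."""
    kp, ti = params
    y = integ = cost = 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        u = kp * (e + integ / ti)     # PI law: u = Kp * (e + (1/Ti) * int e)
        y += dt * (-y + u)            # forward-Euler plant step
        cost += e * e * dt
    return cost

def abc_optimize(f, bounds, n_food=10, iters=60, limit=10, seed=1):
    """Minimal Artificial Bee Colony: employed and onlooker bees perturb
    food sources; a source abandoned after `limit` failed trials is
    replaced by a scout."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_source():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    food = [rand_source() for _ in range(n_food)]
    fit = [f(s) for s in food]
    trials = [0] * n_food
    for _ in range(iters):
        for _ in range(2):                     # employed, then onlooker pass
            for i in range(n_food):
                j = rng.randrange(dim)
                k = rng.randrange(n_food)
                cand = food[i][:]
                cand[j] += rng.uniform(-1, 1) * (food[i][j] - food[k][j])
                lo, hi = bounds[j]
                cand[j] = min(max(cand[j], lo), hi)
                fc = f(cand)
                if fc < fit[i]:
                    food[i], fit[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        for i in range(n_food):                # scout phase
            if trials[i] > limit:
                food[i] = rand_source()
                fit[i], trials[i] = f(food[i]), 0
    best = min(range(n_food), key=lambda i: fit[i])
    return food[best], fit[best]

# Tune (Kp, Ti) within hypothetical bounds.
(kp, ti), cost = abc_optimize(ise, [(0.1, 20.0), (0.05, 5.0)])
```

The tuned pair should comfortably beat a hand-picked baseline such as Kp = 1, Ti = 1 on this toy plant.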
  • Algorithms, Vol. 14, Pages 274: Dynamical Recovery of Complex Networks
           under a Localised Attack

    • Authors: Fan Wang, Gaogao Dong, Lixin Tian
      First page: 274
      Abstract: In real systems, some damaged nodes can spontaneously become active again when they recover on their own or through their active neighbours. However, the spontaneous dynamical recovery of complex networks that suffer a local failure has not yet been taken into consideration. To model this recovery process, we develop a framework for studying the resilience behaviours of a network under a localised attack (LA). Since the state of the nodes within the network affects the subsequent dynamic evolution, we study the dynamic behaviours of local failure propagation and node recovery based on this memory characteristic. We find that the fraction of active nodes switches back and forth between high and low network activity, leading to the spontaneous emergence of phase-flipping phenomena. These behaviours appear in random regular, Erdős-Rényi and scale-free networks, allowing us to compare where the resilience behaviours of these three network types under an LA coincide with, or differ from, those under a random attack. These results will be helpful for studying the spontaneous recovery of real systems under an LA. Our work provides insight into the recovery process and protection strategies for various complex systems from the perspective of damage memory.
      Citation: Algorithms
      PubDate: 2021-09-21
      DOI: 10.3390/a14090274
      Issue No: Vol. 14, No. 9 (2021)
  • Algorithms, Vol. 14, Pages 218: IOb-Cache: A High-Performance Configurable
           Open-Source Cache

    • Authors: João V. Roque, João D. Lopes, Mário P. Véstias, José T. de Sousa
      First page: 218
      Abstract: Open-source processors are increasingly being adopted by industry, which requires open-source implementations of peripherals and other system-on-chip modules. Despite the recent advent of open-source hardware, the available open-source caches have low configurability, lack support for single-cycle pipelined memory accesses, and use non-standard hardware interfaces. In this paper, the IObundle cache (IOb-Cache), a high-performance configurable open-source cache, is proposed, developed and deployed. The cache has front-end and back-end modules for fast integration with processors and memory controllers. The front-end module supports the native interface, and the back-end module supports both the native interface and the standard Advanced eXtensible Interface (AXI). The cache is highly configurable in structure and access policies, and the back-end can be configured to read bursts of multiple words per transfer to take advantage of the available memory bandwidth. To the best of our knowledge, IOb-Cache is currently the only configurable cache that supports pipelined Central Processing Unit (CPU) interfaces and the AXI memory bus interface. Additionally, it has a write-through buffer and an independent controller for writes that complete in a single cycle most of the time, together with 1-cycle reads, whereas previous works support only 1-cycle reads. This brings the best clocks-per-instruction (CPI) close to one (1.055). IOb-Cache is integrated into the IOb System-on-Chip (IOb-SoC) GitHub repository, which has 29 stars and is already being used in 50 projects (forks).
      Citation: Algorithms
      PubDate: 2021-07-21
      DOI: 10.3390/a14080218
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 219: Allocating Students to Industry Placements
           Using Integer Programming and Ant Colony Optimisation

    • Authors: Dhananjay Thiruvady, Kerri Morgan, Susan Bedingfield, Asef Nazari
      First page: 219
      Abstract: The increasing demand for work-ready students has heightened the need for universities to provide work integrated learning programs to enhance and reinforce students’ learning experiences. Students benefit most when placements meet their academic requirements and graduate aspirations. Businesses and community partners are more engaged when they are allocated students that meet their industry requirements. In this paper, both an integer programming (IP) model and an ant colony optimisation heuristic are proposed, with the aim of automating the allocation of students to industry placements. The emphasis is on maximising student engagement and industry partner satisfaction. As part of the objectives, these methods incorporate diversity in industry sectors for students undertaking multiple placements, gender equity across placement providers, and the provision for partners to rank student selections. The experimental analysis is in two parts: (a) we investigate how the IP model performs against manual allocations, and (b) we examine the scalability of the IP model. The results show that the IP model easily outperforms the previous manual allocations. Additionally, an artificial dataset is generated with properties similar to the original data but with greater numbers of students and placements, to test the scalability of the algorithms. The results show that integer programming is the best option for problem instances with fewer than 3000 students. When the problem becomes larger, significantly increasing the time required for an IP solution, ant colony optimisation provides a useful alternative, as it is always able to find good feasible solutions within short time-frames.
      Citation: Algorithms
      PubDate: 2021-07-21
      DOI: 10.3390/a14080219
      Issue No: Vol. 14, No. 8 (2021)
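For intuition, the allocation problem reduces, in its simplest form and ignoring the diversity and equity side-constraints described above, to an assignment problem. A brute-force search over a small hypothetical satisfaction matrix shows what the IP maximizes, and why exhaustive enumeration stops scaling, which is where the ant colony heuristic takes over:

```python
from itertools import permutations

# Hypothetical satisfaction scores: score[s][p] combines student s's
# ranking of placement p with the industry partner's ranking of s.
score = [
    [9, 2, 5],
    [4, 8, 7],
    [7, 3, 1],
]

def best_assignment(score):
    """Exhaustive counterpart of the paper's IP model: assign each student
    to a distinct placement so that total satisfaction is maximal.
    Enumerating all n! permutations only works for tiny n."""
    n = len(score)
    best, best_val = None, float("-inf")
    for perm in permutations(range(n)):
        val = sum(score[s][perm[s]] for s in range(n))
        if val > best_val:
            best, best_val = perm, val
    return best, best_val

assignment, total = best_assignment(score)
```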
  • Algorithms, Vol. 14, Pages 220: Time-Dependent Alternative Route Planning:
           Theory and Practice

    • Authors: Spyros Kontogiannis, Andreas Paraskevopoulos, Christos Zaroliagis
      First page: 220
      Abstract: We consider the problem of computing a set of meaningful alternative origin-to-destination routes in real-world road network instances whose arcs carry travel-time functions rather than fixed costs. In this time-dependent alternative route scenario, we present a novel query algorithm, called Time-Dependent Alternative Graph (TDAG), that exploits the outcome of a time-consuming preprocessing phase to create a manageable amount of travel-time metadata, in order to answer arbitrary alternative-route queries in only a few milliseconds on continental-size instances. The resulting set of alternative routes is aggregated in the form of a time-dependent alternative graph, characterized by minimum route overlap, small stretch factor, small size, and low complexity. To our knowledge, this is the first work that deals with the time-dependent setting in the framework of alternative routes. The preprocessed metadata prescribe the minimum travel-time information between a small set of “landmark” nodes and all other nodes in the graph. The TDAG query algorithm carries out the work in two distinct phases: initially, a collection phase constructs candidate alternative routes; subsequently, a pruning phase cautiously discards uninteresting or low-quality routes from the candidate set. Our experimental evaluation on real-world, time-dependent road networks demonstrates that TDAG performs much better (by one or two orders of magnitude) than the existing baseline approaches.
      Citation: Algorithms
      PubDate: 2021-07-21
      DOI: 10.3390/a14080220
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 221: Interactive Graph Stream Analytics in

    • Authors: Zhihui Du, Oliver Alvarado Rodriguez, Joseph Patchett, David A. Bader
      First page: 221
      Abstract: Data from emerging applications, such as cybersecurity and social networking, can be abstracted as graphs whose edges are updated sequentially in the form of a stream. The challenge of interactive graph stream analytics is responding quickly to end-user queries on terabyte-scale and larger graph stream data. In this paper, a succinct and efficient double-index data structure is designed to build a sketch of a graph stream that can serve general queries. A single-pass stream model is developed, which includes general sketch building, distributed sketch-based analysis algorithms and regression-based approximate solution generation, and a typical graph algorithm, triangle counting, is implemented to evaluate the proposed method. Experimental results on power-law and normal-distribution graph streams show that our method generates accurate results (mean relative error less than 4%) with high performance. All our methods and code have been implemented in an open-source framework, Arkouda, and are available from our GitHub repository, Bader-Research. This work provides the large and rapidly growing Python community with a powerful way to handle terabyte-scale and larger graph stream data using their laptops.
      Citation: Algorithms
      PubDate: 2021-07-21
      DOI: 10.3390/a14080221
      Issue No: Vol. 14, No. 8 (2021)
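Triangle counting, the benchmark used above, has a simple exact streaming form: each arriving edge closes one triangle per common neighbour of its endpoints. The sketch below keeps full adjacency sets for clarity; the paper's double-index sketch replaces them to bound memory on terabyte streams:

```python
from collections import defaultdict

def stream_triangles(edges):
    """Exact incremental triangle count over an edge stream: a new edge
    (u, v) closes one triangle per common neighbour of u and v."""
    adj = defaultdict(set)
    total = 0
    for u, v in edges:
        if u == v or v in adj[u]:
            continue                      # skip self-loops and duplicates
        total += len(adj[u] & adj[v])     # triangles closed by this edge
        adj[u].add(v)
        adj[v].add(u)
    return total

# A 4-clique arriving as a stream contains exactly 4 triangles.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n_tri = stream_triangles(k4)
```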
  • Algorithms, Vol. 14, Pages 222: Constrained Eigenvalue Minimization of
           Incomplete Pairwise Comparison Matrices by Nelder-Mead Algorithm

    • Authors: Hailemariam Abebe Tekile, Michele Fedrizzi, Matteo Brunelli
      First page: 222
      Abstract: Pairwise comparison matrices play a prominent role in multiple-criteria decision-making, particularly in the analytic hierarchy process (AHP). Another form of preference modeling, called an incomplete pairwise comparison matrix, is considered when one or more elements are missing. In this paper, an algorithm is proposed for the optimal completion of an incomplete matrix. Our intention is to numerically minimize a maximum eigenvalue function, which is difficult to write explicitly in terms of the variables, subject to interval constraints. Numerical simulations are carried out in order to examine the performance of the algorithm. The results of our simulations show that the proposed algorithm is able to solve the constrained eigenvalue minimization problem. We provide illustrative examples showing the simplex procedures obtained by the proposed algorithm and how well it completes the given incomplete matrices.
      Citation: Algorithms
      PubDate: 2021-07-23
      DOI: 10.3390/a14080222
      Issue No: Vol. 14, No. 8 (2021)
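The objective being minimized above can be reproduced on a tiny example: for a 3x3 pairwise comparison matrix with one missing reciprocal pair, lambda_max is smallest exactly at the consistent completion (here x = 2 * 3 = 6, giving lambda_max = 3). This sketch uses power iteration for the eigenvalue and, for brevity, a golden-section search over the single missing entry in place of the paper's Nelder-Mead simplex:

```python
def lambda_max(A, iters=200):
    """Dominant eigenvalue of a positive matrix by power iteration
    (guaranteed real and positive by Perron-Frobenius)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)
        v = [c / lam for c in w]
    return lam

def completed(x):
    """3x3 pairwise comparison matrix with one missing reciprocal pair
    (a13, a31) filled by the candidate value x."""
    return [[1.0, 2.0, x],
            [0.5, 1.0, 3.0],
            [1.0 / x, 1.0 / 3.0, 1.0]]

def golden_min(f, lo, hi, iters=80):
    """Golden-section search on a unimodal 1-D objective; a stand-in for
    the paper's Nelder-Mead in this single-variable example."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    for _ in range(iters):
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return (a + b) / 2

x_opt = golden_min(lambda x: lambda_max(completed(x)), 0.1, 20.0)
lam_opt = lambda_max(completed(x_opt))
```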
  • Algorithms, Vol. 14, Pages 223: Fixed Point Results on Multi-Valued
           Generalized (α,β)-Nonexpansive Mappings in Banach Spaces

    • Authors: Kifayat Ullah, Muhammad Safi Ullah Khan, Manuel de la Sen
      First page: 223
      Abstract: In this paper, we provide and study the concept of multi-valued generalized (α,β)-nonexpansive mappings, which is the multi-valued version of the recently developed generalized (α,β)-nonexpansive mappings. We establish some elementary properties and fixed point existence results for these mappings. Moreover, a multi-valued version of the M-iterative scheme is proposed for approximating fixed points of these mappings in the weak and strong senses. Using an example, we also show that the M-iterative scheme converges faster than many other schemes for this class of mappings.
      Citation: Algorithms
      PubDate: 2021-07-25
      DOI: 10.3390/a14080223
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 224: Intelligent Network Intrusion Prevention
           Feature Collection and Classification Algorithms

    • Authors: Deepaa Selva, Balakrishnan Nagaraj, Danil Pelusi, Rajendran Arunkumar, Ajay Nair
      First page: 224
      Abstract: The rapid growth of Internet use and of diverse applications, including military ones, has led researchers to develop smart systems that help applications and users obtain the required quality of service in networks. Such smart technologies protect interactions in dispersed settings such as e-commerce, mobile networking, telecommunications and network management. This article addresses intelligent feature selection methods and intrusion detection (ISTID) organization in networks based on neuro-genetic algorithms, intelligent software agents, genetic algorithms, particle swarm intelligence, neural networks and rough sets. These techniques are useful for identifying and preventing network intrusion, providing Internet safety and improving quality of service, accuracy, performance and efficiency. Furthermore, new algorithms are proposed in this article: an intelligent rules-based attribute collection algorithm for efficient feature selection, and a rules-based improved support vector machine, along with a survey of current smart techniques for intrusion detection systems.
      Citation: Algorithms
      PubDate: 2021-07-26
      DOI: 10.3390/a14080224
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 225: Similar Supergraph Search Based on Graph
           Edit Distance

    • Authors: Masataka Yamada, Akihiro Inokuchi
      First page: 225
      Abstract: Subgraph and supergraph search methods are promising techniques for the development of new drugs. For example, the chemical structure of favipiravir—an antiviral treatment for influenza—resembles the structure of some components of RNA. Represented as graphs, such compounds are similar to a subgraph of favipiravir. However, the existing supergraph search methods can only discover compounds that match exactly. We propose a novel problem, called similar supergraph search, and design an efficient algorithm to solve it. The problem is to identify all graphs in a database that are similar to any subgraph of a query graph, where similarity is defined as edit distance. Our algorithm represents the set of candidate subgraphs by a code tree, which it uses to efficiently compute edit distance. With a distance threshold of zero, our algorithm is equivalent to an existing efficient algorithm for exact supergraph search. Our experiments show that the computation time increased exponentially as the distance threshold increased, but increased sublinearly with the number of graphs in the database.
      Citation: Algorithms
      PubDate: 2021-07-27
      DOI: 10.3390/a14080225
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 226: Synthetic Experiences for Accelerating DQN
           Performance in Discrete Non-Deterministic Environments

    • Authors: Wenzel Pilar von Pilchau, Anthony Stein, Jörg Hähner
      First page: 226
      Abstract: State-of-the-art deep reinforcement learning algorithms such as DQN and DDPG use a replay buffer, a concept called Experience Replay. By default, the buffer contains only the experiences gathered at runtime. We propose a method called Interpolated Experience Replay that uses the stored (real) transitions to create synthetic ones that assist the learner. In this first approach to the field, we limit ourselves to discrete and non-deterministic environments and use a simple, equally weighted average of the rewards in combination with the observed follow-up states. We demonstrate a significantly improved overall mean performance in comparison with a DQN network using vanilla Experience Replay on the discrete and non-deterministic FrozenLake8x8-v0 environment.
      Citation: Algorithms
      PubDate: 2021-07-27
      DOI: 10.3390/a14080226
      Issue No: Vol. 14, No. 8 (2021)
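The averaging idea behind Interpolated Experience Replay, as summarized in the abstract above, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the buffer class and transition format are invented for the example.

```python
from collections import defaultdict
import random

class InterpolatedReplayBuffer:
    """Sketch: store real transitions and synthesize new ones for a
    discrete, non-deterministic environment by averaging observed
    rewards and sampling observed follow-up states per (state, action)."""

    def __init__(self):
        self.real = []                   # (s, a, r, s_next) tuples
        self.stats = defaultdict(list)   # (s, a) -> [(r, s_next), ...]

    def store(self, s, a, r, s_next):
        self.real.append((s, a, r, s_next))
        self.stats[(s, a)].append((r, s_next))

    def synthesize(self, s, a):
        """Create a synthetic transition: equally weighted average of the
        observed rewards, follow-up state drawn from those actually seen."""
        obs = self.stats[(s, a)]
        if not obs:
            return None
        avg_r = sum(r for r, _ in obs) / len(obs)
        s_next = random.choice([sn for _, sn in obs])
        return (s, a, avg_r, s_next)

buf = InterpolatedReplayBuffer()
buf.store(0, 1, 0.0, 2)   # two real transitions from the same (s, a)
buf.store(0, 1, 1.0, 3)
synthetic = buf.synthesize(0, 1)
```

The synthetic transition keeps the real (state, action) pair but carries the mean reward, which is what makes it useful in a non-deterministic environment.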
  • Algorithms, Vol. 14, Pages 227: A Modified Liu and Storey Conjugate
           Gradient Method for Large Scale Unconstrained Optimization Problems

    • Authors: Zabidin Salleh, Ghaliah Alhamzi, Ibitsam Masmali, Ahmad Alhawarat
      First page: 227
      Abstract: The conjugate gradient method is one of the most popular methods for solving large-scale unconstrained optimization problems, since, unlike Newton's method and its approximations, it does not require second derivatives. Moreover, the conjugate gradient method can be applied in many fields, such as neural networks, image restoration, etc. Many sophisticated methods with two or three terms have been proposed to solve such optimization problems. In this paper, we propose a simple, efficient, and robust conjugate gradient method. The new method is constructed based on the Liu and Storey method to overcome its convergence problem and lack of the descent property. The new modified method satisfies the convergence properties and the sufficient descent condition under some assumptions. The numerical results show that the new method outperforms well-known CG methods such as CG-Descent 5.3, Liu and Storey, and Dai and Liao. The numerical results include the number of iterations and CPU time.
      Citation: Algorithms
      PubDate: 2021-07-28
      DOI: 10.3390/a14080227
      Issue No: Vol. 14, No. 8 (2021)
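For readers unfamiliar with the family of methods this paper modifies, here is a minimal sketch of nonlinear conjugate gradient with the classic Liu–Storey conjugacy parameter β = gₖ₊₁ᵀ(gₖ₊₁ − gₖ) / (−dₖᵀgₖ), demonstrated on a toy quadratic. It is not the paper's modified method; the test function and line-search constants are illustrative.

```python
import numpy as np

def liu_storey_cg(f, grad, x0, iters=50, tol=1e-8):
    """Nonlinear conjugate gradient with the classic Liu-Storey beta
    (the base method the paper builds on, not its modification).
    Uses a simple backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5                         # backtrack until sufficient decrease
        x_new = x + t * d
        g_new = grad(x_new)
        # Liu-Storey conjugacy parameter.
        beta = g_new @ (g_new - g) / (-(d @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Demo on f(x) = ||x||^2 / 2, whose gradient is x and whose minimizer is 0.
x_star = liu_storey_cg(lambda x: 0.5 * x @ x, lambda x: x, [3.0, -4.0])
```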
  • Algorithms, Vol. 14, Pages 228: Image Representation Using Stacked Colour Histogram

    • Authors: Ezekiel Mensah Martey, Hang Lei, Xiaoyu Li, Obed Appiah
      First page: 228
      Abstract: Image representation plays a vital role in the realisation of a Content-Based Image Retrieval (CBIR) system. Representation is needed because pixel-by-pixel matching for image retrieval is impracticable as a result of the rigidity of such an approach. In CBIR, therefore, colour, shape, texture and other visual features are used to represent images for effective retrieval. Among these visual features, colour and texture are particularly effective in defining the content of an image. However, combining these features does not necessarily guarantee better retrieval accuracy, owing to image transformations such as rotation, scaling, and translation that an image may have undergone. Moreover, feature vector representations that take up ample memory space increase the running time of the retrieval task. To address these problems, we propose a new colour scheme called the Stack Colour Histogram (SCH), which inherently extracts colour and neighbourhood information into a descriptor for indexing images. SCH performs recurrent mean filtering of the image to be indexed. The recurrent blurring works by repeatedly filtering (transforming) the image: the output of one transformation serves as the input to the next, and after each pass a histogram is generated. The histograms are summed bin by bin and the resulting vector is used to index the image. Because the blurring process uses each pixel's neighbourhood information, the proposed SCH captures the inherent textural information of the indexed image. SCH was extensively tested on the Coil100, Outext, Batik and Corel10K datasets. Coil100, Outext and Batik are generally used to assess image texture descriptors, while Corel10K is used for heterogeneous descriptors. The experimental results show that our proposed descriptor significantly improves retrieval and classification rates compared with CMTH, MTH, TCM, CTM and NRFUCTM, the state-of-the-art descriptors for images with textural features.
      Citation: Algorithms
      PubDate: 2021-07-30
      DOI: 10.3390/a14080228
      Issue No: Vol. 14, No. 8 (2021)
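The blur-then-histogram loop described in the abstract can be sketched on a single-channel image as follows. This is an illustrative reconstruction, not the authors' code: the filter size, number of rounds and bin count are assumptions, and the paper's descriptor operates on colour images rather than the grayscale toy used here.

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter with edge replication: one 'transformation' pass."""
    padded = np.pad(img, 1, mode="edge").astype(float)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def stack_histogram(img, rounds=3, bins=8):
    """Sketch of the descriptor: repeatedly blur the image, take a
    histogram after each pass, and sum the histograms bin by bin."""
    total = np.zeros(bins)
    current = img.astype(float)
    for _ in range(rounds):
        current = box_blur(current)          # output feeds the next pass
        hist, _ = np.histogram(current, bins=bins, range=(0, 256))
        total += hist
    return total

img = np.arange(64, dtype=float).reshape(8, 8) * 4.0   # toy grayscale image
descriptor = stack_histogram(img)
```

Because blurring is a convex combination of neighbouring pixels, every pass stays inside the original intensity range, so each histogram counts all pixels.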
  • Algorithms, Vol. 14, Pages 229: An Efficient Time-Variant Reliability
           Analysis Method with Mixed Uncertainties

    • Authors: Fangyi Li, Yufei Yan, Jianhua Rong, Houyao Zhu
      First page: 229
      Abstract: In practical engineering, due to a lack of information, it is impossible to accurately determine the distributions of all variables, so time-variant reliability problems with both random and interval variables may be encountered. However, this kind of problem usually involves a complex multilevel nested optimization, which imposes a substantial computational burden and makes it difficult to meet the requirements of complex engineering problem analysis. This study proposes a decoupling strategy to efficiently analyze time-variant reliability based on the mixed uncertainty model. The interval variables are treated as independent random variables uniformly distributed in their respective intervals, and a time-variant reliability-equivalent model containing only random variables is established, avoiding multi-layer nested optimization. The stochastic process is first discretized to obtain several static limit state functions at different times, changing the time-variant reliability problem into a conventional time-invariant system reliability problem. The first-order reliability method (FORM) is used to analyze the reliability at each time instant. On the basis of the equivalent model, an efficient and robustly convergent hybrid time-variant reliability calculation algorithm is proposed. Finally, numerical examples show the effectiveness of the proposed method.
      Citation: Algorithms
      PubDate: 2021-07-31
      DOI: 10.3390/a14080229
      Issue No: Vol. 14, No. 8 (2021)
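The decoupling idea (interval variables replaced by uniform random variables, time discretized into a series system) can be illustrated with a plain Monte Carlo estimate. The paper itself uses FORM rather than sampling, and the limit-state function, distributions and interval bounds below are invented for the sketch.

```python
import random

def time_variant_failure_prob(n_samples=20000, times=(0.0, 0.5, 1.0), seed=1):
    """Sketch of the decoupling strategy: an interval variable b in [1.8, 2.2]
    is replaced by a uniform random variable, the stochastic process is
    discretized at a few time instants, and the time-variant problem becomes
    a series-system problem (failure if ANY instant fails).  The limit state
    g(x, b, t) = b - x - 0.3*t is a toy example; the paper uses FORM,
    not Monte Carlo."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        x = rng.gauss(1.0, 0.2)      # random variable ~ N(1, 0.2)
        b = rng.uniform(1.8, 2.2)    # interval variable -> uniform sample
        if any(b - x - 0.3 * t <= 0 for t in times):
            failures += 1            # g <= 0 at some instant: system fails
    return failures / n_samples

pf = time_variant_failure_prob()
```

The series-system view (`any(...)`) is exactly what turns the time-variant problem into a conventional time-invariant system reliability problem.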
  • Algorithms, Vol. 14, Pages 230: Behavior Selection Metaheuristic Search
           Algorithm for the Pollination Optimization: A Simulation Case of Cocoa

    • Authors: Willa Ariela Syafruddin, Rio Mukhtarom Paweroi, Mario Köppen
      First page: 230
      Abstract: Since nature is an excellent source of inspiration for optimization methods, many nature-inspired optimization algorithms have been proposed and modified to solve various optimization problems. This paper applies metaheuristics in a new field inspired by nature; more precisely, we use pollination optimization in cocoa plants. The cocoa plant was chosen as the object of study since its flower type differs from that of other kinds of flowers, for example, by relying on cross-pollination. This complex relationship between plants and pollinators also renders pollination a real-world problem for chocolate production. Therefore, this study first identified the underlying optimization problem as a deferred fitness problem, where the quality of a potential solution cannot be immediately determined. The study then investigates how metaheuristic algorithms derived from three well-known techniques perform when applied to the flower pollination problem. The three techniques examined here are Swarm Intelligence Algorithms, Individual Random Search, and Multi-Agent Systems search. We then compare the behavior of these search methods based on the results of pollination simulations. The criteria are the number of pollinated flowers on the trees and the amount and fairness of nectar pickup for the pollinators. Our results show that the Multi-Agent System performs notably better than the other methods. The results of this study are insights into the co-evolution of behaviors for the collaborative pollination task. We foresee that this investigation can also help farmers increase chocolate production by developing methods to attract and promote pollinators.
      Citation: Algorithms
      PubDate: 2021-07-31
      DOI: 10.3390/a14080230
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 231: Improved Duplication-Transfer-Loss
           Reconciliation with Extinct and Unsampled Lineages

    • Authors: Samson Weiner, Mukul S. Bansal
      First page: 231
      Abstract: Duplication-Transfer-Loss (DTL) reconciliation is a widely used computational technique for understanding gene family evolution and inferring horizontal gene transfer (transfer for short) in microbes. However, most existing models and implementations of DTL reconciliation cannot account for the effect of unsampled or extinct species lineages on the evolution of gene families, likely affecting their accuracy. Accounting for the presence and possible impact of any unsampled species lineages, including those that are extinct, is especially important for inferring and studying horizontal transfer, since many genes in the species lineages represented in the reconciliation analysis are likely to have been acquired through horizontal transfer from unsampled lineages. While models of DTL reconciliation that account for transfer from unsampled lineages have already been proposed, they use a relatively simple framework for such transfers and cannot explicitly infer the location on the species tree of each unsampled or extinct lineage associated with an identified transfer event. Furthermore, no systematic study yet exists to assess the impact of accounting for unsampled lineages on the accuracy of DTL reconciliation.
In this work, we address these deficiencies by (i) introducing an extended DTL reconciliation model, called the DTLx reconciliation model, that accounts for unsampled and extinct species lineages in a new, more functional manner compared to existing models, (ii) showing that optimal reconciliations under the new DTLx reconciliation model can be computed just as efficiently as under the fastest DTL reconciliation model, (iii) providing an efficient algorithm for sampling optimal DTLx reconciliations uniformly at random, (iv) performing the first systematic simulation study to assess the impact of accounting for unsampled lineages on the accuracy of DTL reconciliation, and (v) comparing the accuracies of inferring transfers from unsampled lineages under our new model and the only other previously proposed parsimony-based model for this problem.
      Citation: Algorithms
      PubDate: 2021-08-05
      DOI: 10.3390/a14080231
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 232: A General Cooperative Optimization
           Approach for Distributing Service Points in Mobility Applications

    • Authors: Thomas Jatschka, Günther R. Raidl, Tobias Rodemann
      First page: 232
      Abstract: This article presents a cooperative optimization approach (COA) for distributing service points for mobility applications, which generalizes and refines a previously proposed method. COA is an iterative framework for optimizing service point locations, combining an optimization component with user interaction on a large scale and a machine learning component that learns user needs and provides the objective function for the optimization. The previously proposed COA was designed for mobility applications in which single service points are sufficient for satisfying individual user demand. This framework is generalized here for applications in which the satisfaction of demand relies on the existence of two or more suitably located service stations, such as in the case of bike/car sharing systems. A new matrix factorization model is used as surrogate objective function for the optimization, allowing us to learn and exploit similar preferences among users w.r.t. service point locations. Based on this surrogate objective function, a mixed integer linear program is solved to generate an optimized solution to the problem w.r.t. the currently known user information. User interaction, refinement of the matrix factorization, and optimization are iterated. An experimental evaluation analyzes the performance of COA with special consideration of the number of user interactions required to find near optimal solutions. The algorithm is tested on artificial instances, as well as instances derived from real-world taxi data from Manhattan. Results show that the approach can effectively solve instances with hundreds of potential service point locations and thousands of users, while keeping the user interactions reasonably low. 
A bound on the number of user interactions required to obtain full knowledge of user preferences is derived, and the results show that with only 50% of these user interactions performed, the solutions generated by COA feature optimality gaps of only 1.45% on average.
      Citation: Algorithms
      PubDate: 2021-08-06
      DOI: 10.3390/a14080232
      Issue No: Vol. 14, No. 8 (2021)
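The matrix-factorization surrogate at the heart of COA can be sketched as follows: observed user feedback on candidate service-point locations forms a partially known user × location matrix, which is fit by low-rank factors so that unqueried preferences can be predicted. This is a generic sketch under assumed dimensions and hyperparameters, not the paper's model, and the rating matrix is invented.

```python
import numpy as np

def factorize(ratings, rank=2, lr=0.05, epochs=500, seed=0):
    """Low-rank completion of a partially observed user x location
    preference matrix by full-batch gradient descent on U @ V.T.
    NaN entries mark preferences never queried from the user."""
    rng = np.random.default_rng(seed)
    n_users, n_locs = ratings.shape
    U = rng.normal(0, 0.1, (n_users, rank))
    V = rng.normal(0, 0.1, (n_locs, rank))
    mask = ~np.isnan(ratings)
    for _ in range(epochs):
        err = np.where(mask, ratings - U @ V.T, 0.0)  # error on observed only
        U_new = U + lr * err @ V
        V = V + lr * err.T @ U
        U = U_new
    return U @ V.T          # completed matrix: predictions everywhere

# Two users with similar tastes; NaN marks a location never asked about.
R = np.array([[1.0, 0.0, np.nan],
              [1.0, 0.0, 1.0]])
completed = factorize(R)
```

The completed matrix then serves as the surrogate objective: the optimizer can score candidate locations for all users without further interaction.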
  • Algorithms, Vol. 14, Pages 233: Special Issue on Algorithms and Models for
           Dynamic Multiple Criteria Decision Making

    • Authors: Debora Di Caprio, Francisco Javier Santos Arteaga
      First page: 233
      Abstract: The current Special Issue contains six papers focused on Multiple Criteria Decision Making (MCDM) problems and the formal techniques applied to derive consistent rankings of them [...]
      Citation: Algorithms
      PubDate: 2021-08-08
      DOI: 10.3390/a14080233
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 234: Maritime Supply Chain Optimization by
           Using Fuzzy Goal Programming

    • Authors: Bekir Sahin, Devran Yazir, Abdelsalam Adam Hamid, Noorul Shaiful Fitri Abdul Rahman
      First page: 234
      Abstract: Fuzzy goal programming has important applications in many areas of the supply chain, logistics, transportation and shipping business. Business management is complicated, and there are many interactions between the factors of its components. Maritime transport is the locomotive of world trade, and approximately 90% of the world's products are transported by sea. Optimizing maritime operations is a challenge in providing technical, operational and financial benefits. Fuzzy goal programming models have attracted the interest of many scholars; the objective of this paper is therefore to investigate the problem of minimizing the total cost and minimizing the loss or damage of containers returned from the destination port. There are various types of fuzzy goal programming problems based on different models and solution methods. This paper employs fuzzy goal programming with triangular fuzzy numbers, membership functions, constraints and assumptions, as well as the variables and parameters, for optimizing the solution of the model problem. The proposed model presents the mathematical algorithm and reveals the optimal solution according to a satisfaction rank from 0 to 1. Providing a theoretical background, this study offers novel ideas to researchers, decision makers and authorities.
      Citation: Algorithms
      PubDate: 2021-08-09
      DOI: 10.3390/a14080234
      Issue No: Vol. 14, No. 8 (2021)
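The satisfaction rank in [0, 1] mentioned in the abstract typically comes from combining per-goal membership degrees. Here is a minimal sketch with triangular fuzzy numbers and max-min aggregation; the goals match the paper's (total cost, container loss/damage), but the fuzzy numbers and input values are invented.

```python
def triangular_mu(x, a, b, c):
    """Membership of x in the triangular fuzzy number (a, b, c):
    0 outside (a, c), 1 at the peak b, linear in between."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def satisfaction(total_cost, damage):
    """Sketch of the max-min idea in fuzzy goal programming: each goal
    (total cost, loss/damage of returned containers) gets a membership
    degree, and the overall satisfaction rank is the worst-satisfied
    goal.  The fuzzy numbers below are invented for illustration."""
    mu_cost = triangular_mu(total_cost, 80, 100, 140)
    mu_damage = triangular_mu(damage, 0, 5, 20)
    return min(mu_cost, mu_damage)

rank = satisfaction(total_cost=110, damage=8)
```

A solver would then maximize this min over the decision variables, which is the standard Zimmermann-style reformulation of a fuzzy goal program.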
  • Algorithms, Vol. 14, Pages 235: Computational Complexity and ILP Models
           for Pattern Problems in the Logical Analysis of Data

    • Authors: Giuseppe Lancia, Paolo Serafini
      First page: 235
      Abstract: Logical Analysis of Data is a procedure aimed at identifying relevant features in data sets with both positive and negative samples. The goal is to build Boolean formulas, represented by strings over {0,1,-} called patterns, which can be used to classify new samples as positive or negative. Since a data set can be explained in alternative ways, many computational problems arise related to the choice of a particular set of patterns. In this paper we study the computational complexity of several of these pattern problems (showing that they are, in general, computationally hard) and we propose some integer programming models that appear to be effective. We describe an ILP model for finding the minimum-size set of patterns explaining a given set of samples and another one for the problem of determining whether two sets of patterns are equivalent, i.e., they explain exactly the same samples. We base our first model on a polynomial procedure that computes all patterns compatible with a given set of samples. Computational experiments substantiate the effectiveness of our models on fairly large instances. Finally, we conjecture that the existence of an effective ILP model for finding a minimum-size set of patterns equivalent to a given set of patterns is unlikely, due to the problem being NP-hard and co-NP-hard at the same time.
      Citation: Algorithms
      PubDate: 2021-08-09
      DOI: 10.3390/a14080235
      Issue No: Vol. 14, No. 8 (2021)
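The core notions of the abstract above, a pattern over {0, 1, -} covering a sample and a pattern set explaining a data set, can be stated in a few lines. This is an illustrative sketch of the definitions, not the paper's ILP models, and the sample data are invented.

```python
def covers(pattern, sample):
    """A pattern over {0, 1, -} covers a binary sample iff every
    non-'-' position matches the sample's bit ('-' is a wildcard)."""
    return all(p == "-" or p == s for p, s in zip(pattern, sample))

def explains(patterns, positives, negatives):
    """A pattern set 'explains' the data when every positive sample is
    covered by some pattern and no negative sample is covered by any."""
    return (all(any(covers(p, s) for p in patterns) for s in positives)
            and not any(covers(p, s) for p in patterns for s in negatives))

positives = ["101", "111"]
negatives = ["010", "000"]
ok = explains(["1-1"], positives, negatives)   # one pattern suffices here
```

The paper's first ILP then asks for the smallest such pattern set, and the equivalence problem asks whether two sets cover exactly the same samples.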
  • Algorithms, Vol. 14, Pages 236: SR-Inpaint: A General Deep Learning
           Framework for High Resolution Image Inpainting

    • Authors: Haoran Xu, Xinya Li, Kaiyi Zhang, Yanbai He, Haoran Fan, Sijiang Liu, Chuanyan Hao, Bo Jiang
      First page: 236
      Abstract: Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods are able to handle only low-resolution inputs, typically less than 1K. With the improvement of Internet transmission capacity and mobile device cameras, the resolution of image and video sources available to users via the cloud or locally is increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of the downscaled image, yielding a blurry result. There is therefore an urgent need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. Hence, we propose a general deep learning framework for high-resolution image inpainting, which first hallucinates a semantically continuous blurred result using low-resolution inpainting, suppressing computational overhead. The sharp high-frequency details at the original resolution are then reconstructed using super-resolution refinement. Experimentally, our method achieves inspiring inpainting quality on 2K and 4K resolution images, ahead of state-of-the-art high-resolution inpainting techniques. We expect this framework to be widely adopted for high-resolution image editing tasks on personal computers and mobile devices in the future.
      Citation: Algorithms
      PubDate: 2021-08-10
      DOI: 10.3390/a14080236
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 237: Detect Overlapping Community Based on the
           Combination of Local Expansion and Label Propagation

    • Authors: Xu Li, Qiming Sun
      First page: 237
      Abstract: It is a common phenomenon in real life that individuals have diverse memberships in different social clusters, which is called overlap in network science. Detecting the overlapping components of the community structure in a network has extensive value in real-life applications. Mainstream algorithms for community detection generally focus on optimizing a global or local static metric; they often perform poorly when community characteristics are diverse, and they involve considerable randomness. We propose an algorithm combining local expansion and label propagation. In the local expansion stage, each seed is the node pair with the largest closeness, and the expansion rule also depends on closeness. Local expansion serves only to obtain the centers of expected communities rather than final communities, and these immature communities are pruned according to certain rules so that only dense regions remain. Taking the dense regions as the source makes label propagation stabilize rapidly in its early stages, so that the final communities are detected more accurately. Experiments on synthetic and real-world networks show that our algorithm is more effective not only on the whole, but also at the level of individual nodes. In addition, it is stable across different network structures and maintains high accuracy.
      Citation: Algorithms
      PubDate: 2021-08-11
      DOI: 10.3390/a14080237
      Issue No: Vol. 14, No. 8 (2021)
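The label-propagation stage described above, with dense seed regions acting as fixed label sources, can be sketched as follows. This is a generic sketch, not the paper's algorithm: the seed selection, tie-breaking and the toy graph (two 4-cliques joined by a bridge) are assumptions made for the example.

```python
import random

def label_propagation(adj, seeds, rounds=10, seed=0):
    """Sketch: nodes in dense seed regions start with fixed labels;
    every other node repeatedly adopts the label most common among its
    neighbours.  Ties are broken at random; seed labels never change."""
    rng = random.Random(seed)
    labels = {v: seeds.get(v) for v in adj}
    for _ in range(rounds):
        order = list(adj)
        rng.shuffle(order)                 # asynchronous, random order
        for v in order:
            if v in seeds:
                continue
            counts = {}
            for u in adj[v]:
                if labels[u] is not None:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
            if counts:
                best = max(counts.values())
                labels[v] = rng.choice(
                    [l for l, c in counts.items() if c == best])
    return labels

# Two 4-cliques joined by the bridge edge 3-4; one seed in each clique.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj, seeds={0: "A", 7: "B"})
```

Starting propagation from dense, already-confident regions is what lets the labels stabilize quickly instead of drifting randomly in the early rounds.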
  • Algorithms, Vol. 14, Pages 238: Efficient Construction of the Equation Automaton

    • Authors: Faissal Ouardi, Zineb Lotfi, Bilal Elghadyry
      First page: 238
      Abstract: This paper describes a fast algorithm for constructing the equation automaton directly from the well-known Thompson automaton associated with a regular expression. Allauzen and Mohri presented a unified construction of small automata and gave a construction of the equation automaton with time and space complexity in O(m log m + m²), where m denotes the number of Thompson automaton transitions. It is based on two classical automata operations, namely epsilon-removal and Hopcroft's algorithm for Deterministic Finite Automaton (DFA) minimization. Using the notion of c-continuation, Ziadi et al. presented a fast computation of the equation automaton in O(m²) time. In this paper, we design an output-sensitive algorithm combining the advantages of the previous algorithms and show that its computational complexity can be reduced to O(m × |Q≡e|), where |Q≡e| denotes the number of states of the equation automaton, by means of epsilon-removal and the Bubenzer minimization algorithm for Acyclic Deterministic Finite Automata (ADFA).
      Citation: Algorithms
      PubDate: 2021-08-11
      DOI: 10.3390/a14080238
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 239: Adaptive Self-Scaling Brain-Storm
           Optimization via a Chaotic Search Mechanism

    • Authors: Zhenyu Song, Xuemei Yan, Lvxing Zhao, Luyi Fan, Cheng Tang, Junkai Ji
      First page: 239
      Abstract: Brain-storm optimization (BSO), a population-based optimization algorithm, exhibits poor search performance, premature convergence, and a high probability of falling into local optima. To address these problems, we developed the adaptive mechanism-based BSO (ABSO) algorithm based on chaotic local search. Adjusting the search space with a local search method based on an adaptive self-scaling mechanism balances the global search and local exploitation performance of ABSO, effectively preventing the algorithm from falling into local optima and improving its convergence accuracy. To verify the stability and effectiveness of the proposed ABSO algorithm, its performance was tested on 29 benchmark functions, and the mean and standard deviation were compared with those of five other optimization algorithms. The results show that ABSO outperforms the other algorithms in terms of stability and convergence accuracy. In addition, the performance of ABSO was further verified through a nonparametric statistical test.
      Citation: Algorithms
      PubDate: 2021-08-13
      DOI: 10.3390/a14080239
      Issue No: Vol. 14, No. 8 (2021)
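A chaotic local search of the kind ABSO builds on can be sketched with the logistic map. This is a generic illustration: the shrinking-radius schedule stands in for, but is not, the paper's adaptive self-scaling mechanism, and the test function is invented.

```python
def logistic_map(z, mu=4.0):
    """One step of the logistic map, a standard chaotic sequence generator."""
    return mu * z * (1.0 - z)

def chaotic_local_search(f, x, lo, hi, steps=200, radius=0.5, z0=0.37):
    """Sketch of chaotic local search around an incumbent solution x:
    a logistic-map sequence z_k in (0, 1) is mapped into a shrinking
    neighbourhood of the best solution found, and improvements are kept.
    The 1/k radius schedule is an illustrative choice, not ABSO's."""
    best_x, best_f = x, f(x)
    z = z0
    for k in range(1, steps + 1):
        z = logistic_map(z)
        r = radius / k                      # shrink the search radius
        cand = best_x + (2.0 * z - 1.0) * r # chaotic offset in [-r, r]
        cand = min(max(cand, lo), hi)       # clip to the feasible box
        if f(cand) < best_f:
            best_x, best_f = cand, f(cand)
    return best_x, best_f

# Minimize f(x) = (x - 0.7)^2 on [0, 1] starting from x = 0.2.
x_best, f_best = chaotic_local_search(lambda x: (x - 0.7) ** 2, 0.2, 0.0, 1.0)
```

The chaotic sequence visits the neighbourhood more ergodically than uniform noise, which is why such maps are popular for escaping local optima.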
  • Algorithms, Vol. 14, Pages 240: Adaptive Supply Chain: Demand–Supply
           Synchronization Using Deep Reinforcement Learning

    • Authors: Zhandos Kegenbekov, Ilya Jackson
      First page: 240
      Abstract: Adaptive and highly synchronized supply chains can avoid a cascading rise-and-fall inventory dynamic and mitigate ripple effects caused by operational failures. This paper aims to demonstrate how a deep reinforcement learning agent based on the Proximal Policy Optimization (PPO) algorithm can synchronize inbound and outbound flows and support business continuity when operating in a stochastic and nonstationary environment, provided end-to-end visibility is available. PPO requires neither a hardcoded action space nor exhaustive hyperparameter tuning. These features, complemented by a straightforward supply chain environment, give rise to a general and task-unspecific approach to adaptive control in multi-echelon supply chains. The proposed approach is compared with the base-stock policy, a well-known method in classical operations research and inventory control theory that is prevalent in continuous-review inventory systems. The paper concludes that the proposed solution can perform adaptive control in complex supply chains, and postulates fully fledged supply chain digital twins as a necessary infrastructural condition for scalable real-world applications.
      Citation: Algorithms
      PubDate: 2021-08-15
      DOI: 10.3390/a14080240
      Issue No: Vol. 14, No. 8 (2021)
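The base-stock policy used as the comparison baseline in the paper above is simple enough to state in full; the single-echelon, zero-lead-time simulation and the demand numbers below are illustrative assumptions.

```python
def base_stock_order(inventory_position, S):
    """The base-stock (order-up-to) rule: each review, order exactly
    enough to raise the inventory position (on hand + on order -
    backorders) back to the base-stock level S; never order negatively."""
    return max(0, S - inventory_position)

def simulate(demands, S, start=0):
    """Minimal single-echelon, zero-lead-time simulation of the policy."""
    inventory = start
    history = []
    for d in demands:
        inventory += base_stock_order(inventory, S)   # replenish up to S
        inventory -= d                                # then serve demand
        history.append(inventory)
    return history

history = simulate([3, 7, 2, 5], S=10)
```

Against this static rule, the reinforcement learning agent's advantage is that it can adapt its ordering behaviour when the demand process is nonstationary.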
  • Algorithms, Vol. 14, Pages 241: Property-Based Semantic Similarity
           Criteria to Evaluate the Overlaps of Schemas

    • Authors: Lan Huang, Yuanwei Zhao, Bo Wang, Dongxu Zhang, Rui Zhang, Subhashis Das, Simone Bocca, Fausto Giunchiglia
      First page: 241
      Abstract: Knowledge graph-based data integration is a practical methodology for constructing services that integrate heterogeneous legacy databases. However, it is neither efficient nor economical to build a new cross-domain knowledge graph on top of the schemas of each legacy database for a specific integration application rather than reusing existing high-quality knowledge graphs. Consequently, a question arises as to whether an existing knowledge graph is compatible with cross-domain queries and with the heterogeneous schemas of the legacy systems. An effective criterion is urgently needed to evaluate such compatibility, as it sets an upper bound on the quality of the integration. This research studies the semantic similarity of schemas from the perspective of properties. It provides a set of in-depth criteria, namely coverage and flexibility, to evaluate the pairwise compatibility between schemas. It takes advantage of the properties of knowledge graphs to evaluate the overlaps between schemas and defines weights for entity types in order to perform precise compatibility computation. The effectiveness of the criteria in evaluating the compatibility between knowledge graphs and cross-domain queries is demonstrated in a case study.
      Citation: Algorithms
      PubDate: 2021-08-17
      DOI: 10.3390/a14080241
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 242: Data Mining Algorithms for Smart Cities: A
           Bibliometric Analysis

    • Authors: Anestis Kousis, Christos Tjortjis
      First page: 242
      Abstract: Smart cities connect people and places using innovative technologies such as Data Mining (DM), Machine Learning (ML), big data, and the Internet of Things (IoT). This paper presents a bibliometric analysis to provide a comprehensive overview of studies associated with DM technologies used in smart city applications. The study aims to identify the main DM techniques used in the context of smart cities and how the research field of DM for smart cities has evolved over time. We adopted both qualitative and quantitative methods to explore the topic. We used the Scopus database to find relevant articles published in scientific journals. This study covers 197 articles published over the period from 2013 to 2021. For the bibliometric analysis, we used the Bibliometrix library, developed in R. Our findings show that there is a wide range of DM technologies used in every layer of a smart city project. Several ML algorithms, supervised or unsupervised, are adopted for operating the instrumentation, middleware, and application layers. The bibliometric analysis shows that DM for smart cities is a fast-growing scientific field, and scientists from all over the world show great interest in researching and collaborating on this interdisciplinary field.
      Citation: Algorithms
      PubDate: 2021-08-17
      DOI: 10.3390/a14080242
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 243: Tourism Demand Forecasting Based on an
           LSTM Network and Its Variants

    • Authors: Shun-Chieh Hsieh
      First page: 243
      Abstract: The need for accurate tourism demand forecasting is widely recognized, and the unreliability of traditional methods makes it a continuing challenge. Using deep learning approaches, this study aims to adapt Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU) networks, which are straightforward and efficient, to improve Taiwan's tourism demand forecasting. These networks are able to capture the dependencies in visitor arrival time series data. The Adam optimization algorithm with an adaptive learning rate is used to optimize the basic setup of the models. The results show that the proposed models outperform previous studies undertaken during the Severe Acute Respiratory Syndrome (SARS) events of 2002–2003. This article also examines the effects of the current COVID-19 outbreak on tourist arrivals in Taiwan. The results show that the LSTM network and its variants perform satisfactorily for tourism demand forecasting.
      Citation: Algorithms
      PubDate: 2021-08-18
      DOI: 10.3390/a14080243
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 244: An Efficient Geometric Search Algorithm of
           Pandemic Boundary Detection

    • Authors: Zhanhao Zhang, Qifan Huang
      First page: 244
      Abstract: We consider a scenario where the pandemic infection rate is inversely proportional to the power of the distance between the infected region and the non-infected region. In our study, we analyze the case where the exponent of the distance is 2, which is in accordance with Reilly’s law of retail gravitation. One can test for infection but such tests are costly so one seeks to determine the region of infection while performing few tests. Our goal is to find a boundary region of minimal size that contains all infected areas. We discuss efficient algorithms and provide the asymptotic bound of the testing cost and simulation results for this problem.
      Citation: Algorithms
      PubDate: 2021-08-18
      DOI: 10.3390/a14080244
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 245: SVSL: A Human Activity Recognition Method
           Using Soft-Voting and Self-Learning

    • Authors: Aiiad Albeshri
      First page: 245
      Abstract: Many smart city and society applications, such as smart health (elderly care, medical applications), smart surveillance, sports, and robotics, require the recognition of user activities, an important class of problems known as human activity recognition (HAR). Progress in HAR research has been hindered by several issues, particularly with the emergence of fog and edge computing, which brings many new opportunities (low latency, dynamic and real-time decision making, etc.) but also brings its own challenges. This paper focuses on addressing two important research gaps in HAR research: (i) improving HAR prediction accuracy and (ii) managing frequent changes in the environment and in the data related to user activities. To address these, we propose an HAR method based on Soft-Voting and Self-Learning (SVSL). SVSL uses two strategies. First, to enhance accuracy, it combines the capabilities of Deep Learning (DL), Generalized Linear Model (GLM), Random Forest (RF), and AdaBoost classifiers using soft voting. Second, to classify the most challenging data instances, SVSL is equipped with a self-training mechanism that generates training data and retrains itself. We investigate the performance of our proposed SVSL method using two publicly available datasets covering six human activities related to lying, sitting, and walking positions. The first dataset consists of 562 features and the second of five features, with the data collected using smartphone accelerometer and gyroscope sensors. The results show that the proposed method provides 6.26%, 1.75%, 1.51%, and 4.40% better prediction accuracy (averaged over the two datasets) than GLM, DL, RF, and AdaBoost, respectively. We also analyze and compare the class-wise performance of SVSL with that of DL, GLM, RF, and AdaBoost.
      Citation: Algorithms
      PubDate: 2021-08-19
      DOI: 10.3390/a14080245
      Issue No: Vol. 14, No. 8 (2021)
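The soft-voting step described in the abstract can be sketched in a few lines (a generic illustration with made-up probability vectors, not the paper's trained DL/GLM/RF/AdaBoost models):

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several classifiers and
    return the index of the class with the highest average probability."""
    n = len(prob_lists)
    k = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n for i in range(k)]
    return max(range(k), key=lambda i: avg[i])

# Three hypothetical classifiers scoring classes (lying, sitting, walking):
probs = [
    [0.2, 0.5, 0.3],  # e.g. GLM
    [0.1, 0.3, 0.6],  # e.g. RF
    [0.2, 0.4, 0.4],  # e.g. AdaBoost
]
print(soft_vote(probs))  # 2 (averages 0.167, 0.400, 0.433)
```

Unlike hard (majority) voting, soft voting lets a confident classifier outvote two lukewarm ones, which is why it tends to help on the "most challenging" instances the abstract mentions.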
  • Algorithms, Vol. 14, Pages 246: Scheduling Multiprocessor Tasks with Equal
           Processing Times as a Mixed Graph Coloring Problem

    • Authors: Sotskov, Mihova
      First page: 246
      Abstract: This article extends the scheduling problem with dedicated processors, unit-time tasks, and minimization of maximal lateness for integer due dates to a scheduling problem in which, along with precedence constraints given on the set of multiprocessor tasks, a subset of tasks must be processed simultaneously. Contrary to a classical shop-scheduling problem, several processors must fulfill a multiprocessor task. Furthermore, two types of precedence constraints may be given on the task set. We prove that the extended scheduling problem with integer release times of the jobs to minimize schedule length may be solved as an optimal mixed graph coloring problem, which consists of assigning a minimal number of colors (positive integers) to the vertices of the mixed graph such that, if two vertices are joined by an edge, their colors have to be different, and if two vertices are joined by an arc, the color of the arc's tail vertex has to be no greater than the color of its head vertex. We prove two theorems implying that most analytical results proved so far for optimal colorings of mixed graphs have analogous results valid for the extended scheduling problems to minimize the schedule length or maximal lateness, and vice versa.
      Citation: Algorithms
      PubDate: 2021-08-19
      DOI: 10.3390/a14080246
      Issue No: Vol. 14, No. 8 (2021)
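The two coloring constraints of a mixed graph can be checked mechanically; a minimal sketch with illustrative vertex names (this verifies a coloring, it is not the paper's optimal-coloring algorithm):

```python
def is_valid_coloring(colors, edges, arcs):
    """Mixed-graph coloring check: endpoints of an (undirected) edge need
    distinct colors; an arc (u, v) needs color(u) <= color(v)."""
    return (all(colors[u] != colors[v] for u, v in edges)
            and all(colors[u] <= colors[v] for u, v in arcs))

# Tasks a and b share a processor (edge); a must precede c (arc).
colors = {"a": 1, "b": 2, "c": 1}
print(is_valid_coloring(colors, edges=[("a", "b")], arcs=[("a", "c")]))  # True
```

In the scheduling reading, a vertex's color is the time slot of the task: edges encode resource conflicts (different slots) and arcs encode precedence (no later than its successor).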
  • Algorithms, Vol. 14, Pages 247: Numerical Algorithm for Dynamic Impedance
           of Bridge Pile-Group Foundation and Its Validation

    • Authors: Chenyu Wang, Hong Qiao, Yi Wang, Xianting Du
      First page: 247
      Abstract: The characteristics of the bridge pile-group foundation have a significant influence on the dynamic performance of the superstructure. Most of the existing analysis methods for pile-group foundation impedance are highly specialized and cannot be generalized to practical projects. Therefore, a project-oriented numerical algorithm is proposed to compute the dynamic impedance of the bridge pile-group foundation. Based on the theory of the viscous-spring artificial boundary, the derivation and solution of the impedance function are transferred to numerical modeling and harmonic analysis, which can be carried out through the finite element method. Taking a typical pile-group foundation as a case study, the results of the algorithm are compared with those from the existing literature. Moreover, an impact experiment on a real pile-group foundation was implemented, the results of which are also compared with those of the proposed numerical algorithm. Both comparisons show that the proposed numerical algorithm satisfies engineering precision, demonstrating good effectiveness in application.
      Citation: Algorithms
      PubDate: 2021-08-20
      DOI: 10.3390/a14080247
      Issue No: Vol. 14, No. 8 (2021)
  • Algorithms, Vol. 14, Pages 248: Experimental Validation of a Guaranteed
           Nonlinear Model Predictive Control

    • Authors: Mohamed Fnadi, Julien Alexandre dit Sandretto
      First page: 248
      Abstract: This paper combines interval analysis tools with nonlinear model predictive control (NMPC). The NMPC strategy is formulated based on an uncertain dynamic model expressed as nonlinear ordinary differential equations (ODEs). All the dynamic parameters are identified in a guaranteed way, considering the various uncertainties on the embedded sensors and the system’s design. The NMPC problem is solved at each time step using validated simulation and interval analysis methods to compute the optimal and safe control inputs over a finite prediction horizon. This approach considers several constraints that are crucial for the system’s safety and stability, namely the state and control limits. The proposed controller consists of two steps: filtering and branching procedures that find the input intervals fulfilling the state constraints and ensuring convergence to the reference set. Then, the optimization procedure computes the optimal, punctual control input that must be sent to the system’s actuators for pendulum stabilization. The validated NMPC capabilities are illustrated through several simulations under the DynIbex library and experiments using an inverted pendulum.
      Citation: Algorithms
      PubDate: 2021-08-20
      DOI: 10.3390/a14080248
      Issue No: Vol. 14, No. 8 (2021)
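The validated-simulation machinery rests on interval arithmetic; a toy sketch of the core primitive (generic, not the DynIbex API): every pointwise result of an expression is enclosed by its interval evaluation.

```python
class Interval:
    """Minimal interval type: [lo, hi] propagated through + and *."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is bounded by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def contains(self, x):
        return self.lo <= x <= self.hi

# (a + b) * a with a in [1, 2] and b in [-1, 1] is enclosed by [0, 6]:
a, b = Interval(1, 2), Interval(-1, 1)
r = (a + b) * a
print(r.lo, r.hi)  # 0 6
```

This enclosure property is what lets the controller reason about *all* trajectories compatible with the parameter uncertainty, rather than a single nominal one.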
  • Algorithms, Vol. 14, Pages 249: Myocardial Infarction Quantification from
           Late Gadolinium Enhancement MRI Using Top-Hat Transforms and Neural
           Networks

    • Authors: Ezequiel de la Rosa, Désiré Sidibé, Thomas Decourselle, Thibault Leclercq, Alexandre Cochet, Alain Lalande
      First page: 249
      Abstract: Late gadolinium enhancement (LGE) MRI is the gold standard technique for myocardial viability assessment. Although the technique accurately reflects the damaged tissue, there is no clinical standard for quantifying myocardial infarction (MI). Moreover, the commercial software used in clinical practice is mostly semi-automatic and hence requires direct intervention by experts. In this work, a new automatic method for MI quantification from LGE-MRI is proposed. Our novel segmentation approach is devised to accurately detect not only hyper-enhanced lesions but also microvascular obstruction areas. Moreover, it includes a myocardial disease detection step, which extends the algorithm to work on healthy scans. The method is based on a cascade approach in which, firstly, diseased slices are identified by a convolutional neural network (CNN). Secondly, a fast coarse scar segmentation is obtained by means of morphological operations. Thirdly, the segmentation is refined by a boundary-voxel reclassification strategy using an ensemble of very light CNNs. We tested the method on an LGE-MRI database with healthy (n = 20) and diseased (n = 80) cases following a 5-fold cross-validation scheme. Our approach segmented myocardial scars with an average Dice coefficient of 77.22 ± 14.3% and a volumetric error of 1.0 ± 6.9 cm3. In a comparison against nine reference algorithms, the proposed method achieved the highest agreement in volumetric scar quantification with the expert delineations (p < 0.001 when compared to the other approaches). Moreover, it was able to reproduce the intra- and inter-rater variability of the scar segmentation. Our approach is a good first step towards automatic and accurate myocardial scar segmentation, although validation over larger LGE-MRI databases is needed.
      Citation: Algorithms
      PubDate: 2021-08-20
      DOI: 10.3390/a14080249
      Issue No: Vol. 14, No. 8 (2021)
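The Dice coefficient used to report segmentation quality has a one-line definition; a minimal sketch on flat binary masks:

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    twice the overlap divided by the sum of the two mask sizes."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2 * inter / (sum(a) + sum(b))

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5 (overlap 1, sizes 2 and 2)
```

A Dice of 1.0 means perfect overlap with the expert delineation; the paper's 77.22% average sits between the coarse morphological stage and the refined boundary-voxel output.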
  • Algorithms, Vol. 14, Pages 250: A Real-Time Network Traffic Classifier for
           Online Applications Using Machine Learning

    • Authors: Ahmed Abdelmoamen Ahmed, Gbenga Agunsoye
      First page: 250
      Abstract: The increasing ubiquity of network traffic and the deployment of new online applications have increased the complexity of traffic analysis. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it challenging for network administrators to identify online applications using traditional port-based approaches. One way to classify modern network traffic is to use machine learning (ML) to distinguish between traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), to classify the 53 most popular online applications, including Amazon, YouTube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A user-friendly web-based interface is developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset.
      Citation: Algorithms
      PubDate: 2021-08-21
      DOI: 10.3390/a14080250
      Issue No: Vol. 14, No. 8 (2021)
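Of the three models, KNN is the simplest to sketch. The toy classifier below uses two hypothetical flow features (the paper's dataset has 87) and made-up application labels:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a flow-feature vector by majority vote among its k nearest
    training flows (Euclidean distance)."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# (mean packet size, inter-arrival time) -> application label
train = [((1400, 5), "video"), ((1300, 7), "video"), ((90, 40), "chat"),
         ((110, 55), "chat"), ((1200, 6), "video")]
print(knn_predict(train, (1250, 6)))  # "video"
```

In practice the features would be normalized first, since raw packet sizes dominate the distance otherwise; that detail is omitted here.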
  • Algorithms, Vol. 14, Pages 251: Comparative Analysis of Recurrent Neural
           Networks in Stock Price Prediction for Different Frequency Domains

    • Authors: Polash Dey, Emam Hossain, Md. Ishtiaque Hossain, Mohammed Armanuzzaman Chowdhury, Md. Shariful Alam, Mohammad Shahadat Hossain, Karl Andersson
      First page: 251
      Abstract: Investors in the stock market have always been in search of novel and unique techniques that allow them to successfully predict stock price movement and make a big profit. Investors continue to look for new and improved techniques to beat the market, rather than old and traditional ones, so researchers are continuously working to build novel techniques to meet this demand. Different types of recurrent neural networks (RNN) are used in time series analyses, especially in stock price prediction. However, since not all stock prices follow the same trend, a single model cannot be used to predict the movement of all types of stock price. Therefore, in this research we conducted a comparative analysis of three commonly used RNNs, namely simple RNN, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), and analyzed their efficiency for stocks having different trends and price ranges and for different time frequencies. We considered three companies’ datasets from 30 June 2000 to 21 July 2020. The stocks follow different trends of price movements, with price ranges of $30, $50, and $290 during this period. We also analyzed the performance for one-day, three-day, and five-day time intervals. We compared the performance of RNN, LSTM, and GRU in terms of the R2 value, MAE, MAPE, and RMSE metrics. The results show that the simple RNN is outperformed by LSTM and GRU because the RNN is susceptible to vanishing gradient problems, while the other two models are not. Moreover, GRU produces smaller errors than LSTM. It is also evident from the results that, as the time intervals get smaller, the models produce lower errors and higher reliability.
      Citation: Algorithms
      PubDate: 2021-08-22
      DOI: 10.3390/a14080251
      Issue No: Vol. 14, No. 8 (2021)
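The reported error metrics have standard definitions; a minimal sketch with illustrative values:

```python
import math

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def r2(y, p):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

y, p = [10.0, 20.0, 30.0], [12.0, 18.0, 33.0]
print(mae(y, p), rmse(y, p), r2(y, p))
```

MAE and RMSE are in price units (RMSE punishes large misses harder), while R2 is scale-free, which is what makes it usable across stocks with $30 and $290 price ranges.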
  • Algorithms, Vol. 14, Pages 252: Constrained Dynamic Mean-Variance
           Portfolio Selection in Continuous-Time

    • Authors: Weiping Wu, Lifen Wu, Ruobing Xue, Shan Pang
      First page: 252
      Abstract: This paper revisits the dynamic MV portfolio selection problem with cone constraints in continuous time. We first reformulate our constrained MV portfolio selection model into a special constrained LQ optimal control model and develop the optimal portfolio policy of our model. In addition, we provide an alternative method to solve this dynamic MV portfolio selection problem with cone constraints. More specifically, instead of solving the corresponding HJB equation directly, we develop the optimal solution using special properties of the value function induced by the model structure, such as its monotonicity and convexity. Finally, we provide an example to illustrate how to use our solution in a real application. The illustrative example demonstrates that our dynamic MV portfolio policy dominates the static MV portfolio policy.
      Citation: Algorithms
      PubDate: 2021-08-23
      DOI: 10.3390/a14080252
      Issue No: Vol. 14, No. 8 (2021)
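For orientation, the cone-constrained continuous-time MV problem the abstract refers to can be written in its standard textbook form (an assumed formulation based on the abstract's terminology, with $W(T)$ the terminal wealth, $d$ a target expected return, $u(t)$ the portfolio allocation, and $K$ the constraint cone):

```latex
\min_{u(\cdot)} \; \operatorname{Var}\!\bigl(W(T)\bigr)
\quad \text{subject to} \quad
\mathbb{E}\bigl[W(T)\bigr] = d, \qquad u(t) \in K \;\; \forall\, t \in [0,T].
```

The cone constraint $u(t) \in K$ covers, for example, no-short-selling ($K$ the nonnegative orthant), which is what breaks the closed-form solvability of the unconstrained problem and motivates the LQ reformulation.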
  • Algorithms, Vol. 14, Pages 197: An Optimal and Stable Algorithm for
           Clustering Numerical Data

    • Authors: Ali Seman, Azizian Mohd Sapawi
      First page: 197
      Abstract: In the conventional k-means framework, seeding is the first step toward optimization before the objects are clustered. With random seeding, two main issues arise: the clustering results may be less than optimal, and different clustering results may be obtained for every run. In real-world applications, optimal and stable clustering is highly desirable. This report introduces a new clustering algorithm called the zero k-approximate modal haplotype (Zk-AMH) algorithm, which uses a simple and novel seeding mechanism known as zero-point multidimensional spaces. The Zk-AMH provides cluster optimality and stability, thereby resolving the aforementioned issues. Notably, the Zk-AMH algorithm yielded identical mean, maximum, and minimum scores over 100 runs, with zero standard deviation, demonstrating its stability. Additionally, when the Zk-AMH algorithm was applied to eight datasets, it achieved the highest mean scores for four datasets, produced an approximately equal score for one dataset, and yielded marginally lower scores for the other three datasets. With its optimality and stability, the Zk-AMH algorithm could be a suitable alternative for developing future clustering tools.
      Citation: Algorithms
      PubDate: 2021-06-29
      DOI: 10.3390/a14070197
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 198: Decimal Multiplication in FPGA with a
           Novel Decimal Adder/Subtractor

    • Authors: Mário P. Véstias, Horácio C. Neto
      First page: 198
      Abstract: Financial and commercial data are mostly represented in decimal format. To avoid the errors introduced when converting some decimal fractions to binary, these data are processed with decimal arithmetic. However, most processors only have hardwired binary arithmetic units, so decimal operations are executed with slow software-based decimal arithmetic functions. For the fast execution of decimal operations, dedicated hardware units have been proposed and designed in FPGA. Decimal multiplication is found in most decimal-based applications, so its optimized design is very important for fast execution. In this paper, two new parallel decimal multipliers in FPGA are proposed. These are based on a new decimal adder/subtractor, also proposed in this paper. The new decimal multipliers improve on state-of-the-art parallel decimal multipliers: compared to previous architectures, implementation results show that the proposed multipliers achieve 26% better area and 12% better performance. The new decimal multipliers also reduce the area and performance gap to binary multipliers and are smaller for 32-digit operands.
      Citation: Algorithms
      PubDate: 2021-06-29
      DOI: 10.3390/a14070198
      Issue No: Vol. 14, No. 7 (2021)
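The conversion error motivating decimal hardware is easy to reproduce. In the snippet below, Python's `decimal` module plays the role of the slow software-based decimal arithmetic the abstract mentions:

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so binary floats carry a
# representation error that decimal arithmetic avoids entirely.
print(0.1 + 0.2 == 0.3)                                    # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```

For a financial ledger summing millions of such fractions, these per-operation errors accumulate, which is why the data are kept and processed in decimal and why hardware acceleration of decimal multiply/add pays off.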
  • Algorithms, Vol. 14, Pages 199: CARA: A Congestion-Aware Routing Algorithm
           for Wireless Sensor Networks

    • Authors: Jiangyu Yan, Bing Qi
      First page: 199
      Abstract: Congestion control is one of the key research topics in relation to the routing algorithms of wireless sensor networks (WSNs). In this paper, we propose a congestion-aware routing algorithm (CARA) for unlimited-lifetime wireless sensor networks that integrates the geographic distance and traffic load of sensor nodes. The algorithm takes alleviating congestion as its primary purpose and considers both the traffic of the node itself and the local network traffic. Based on the geographic distance between nodes, CARA defines four decision parameters (node load factor, forward rate, cache remaining rate, and forward average cache remaining rate), selecting the best node as the next hop through a multi-attribute decision-making method. Compared with two existing algorithms for congestion control, our simulation results suggest that the CARA algorithm alleviates network congestion and meets reasonable network delay and energy consumption requirements.
      Citation: Algorithms
      PubDate: 2021-06-30
      DOI: 10.3390/a14070199
      Issue No: Vol. 14, No. 7 (2021)
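A weighted-sum sketch of choosing a next hop from the four decision parameters. The equal weights and linear scoring are illustrative assumptions; the abstract does not specify the exact multi-attribute decision-making method used:

```python
def best_next_hop(candidates, weights=(0.25, 0.25, 0.25, 0.25)):
    """Pick the candidate with the best weighted score. A lower load factor
    is better; higher forward rate and cache-remaining rates are better."""
    w_load, w_fwd, w_cache, w_avg = weights

    def score(attrs):
        load, fwd, cache, avg_cache = attrs
        return -w_load * load + w_fwd * fwd + w_cache * cache + w_avg * avg_cache

    return max(candidates, key=lambda n: score(candidates[n]))

# node: (load factor, forward rate, cache remaining, avg cache remaining)
nodes = {"n1": (0.9, 0.5, 0.2, 0.3), "n2": (0.3, 0.6, 0.8, 0.7)}
print(best_next_hop(nodes))  # "n2" (lightly loaded, plenty of cache)
```

The point of mixing all four attributes rather than using geographic distance alone is that the geographically best hop may be the most congested one.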
  • Algorithms, Vol. 14, Pages 200: An Enhanced Discrete Symbiotic Organism
           Search Algorithm for Optimal Task Scheduling in the Cloud

    • Authors: Suleiman Sa’ad, Abdullah Muhammed, Mohammed Abdullahi, Azizol Abdullah, Fahrul Hakim Ayob
      First page: 200
      Abstract: Recently, cloud computing has begun to experience tremendous growth because government agencies and private organisations are migrating to the cloud environment. Hence, an efficient task scheduling strategy is paramount for effectively improving the prospects of cloud computing. Typically, a certain number of tasks are scheduled to use diverse resources (virtual machines) to minimise the makespan and achieve optimum utilisation of the system by reducing the response time within the cloud environment. The task scheduling problem is NP-complete; as such, obtaining a precise solution is difficult, particularly for large-scale tasks. Therefore, in this paper, we propose a metaheuristic enhanced discrete symbiotic organism search (eDSOS) algorithm for optimal task scheduling in the cloud computing setting. Our proposed algorithm is an extension of the standard symbiotic organism search (SOS), a nature-inspired algorithm that has been applied to various numerical optimisation problems. This algorithm imitates the symbiotic associations (mutualism, commensalism, and parasitism stages) displayed by organisms in an ecosystem. Despite the improvements made with the discrete symbiotic organism search (DSOS) algorithm, it still becomes trapped in local optima due to the large size of the makespan and response time values. The local search space of the DSOS is diversified by substituting the best value with any candidate in the population at the mutualism phase of the DSOS algorithm, which makes it suitable for task scheduling problems in the cloud. Thus, the eDSOS strategy converges faster when the search space is large due to this diversification. The CloudSim simulator was used to conduct the experiments, and the simulation results show that the proposed eDSOS produces solutions of better quality than the DSOS.
Lastly, we analysed the proposed strategy using a two-sample t-test, which revealed that the performance of eDSOS was significantly better than that of the benchmark strategy (DSOS), particularly for large search spaces. The percentage improvements were 26.23% for the makespan and 63.34% for the response time.
      Citation: Algorithms
      PubDate: 2021-06-30
      DOI: 10.3390/a14070200
      Issue No: Vol. 14, No. 7 (2021)
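The mutualism phase that eDSOS modifies follows the standard SOS update. A sketch is below; per the abstract, eDSOS diversifies by passing a randomly chosen population member as `ref` where plain SOS/DSOS would pass the best organism:

```python
import random

def mutualism_step(x_i, x_j, ref, rng):
    """One SOS mutualism update on organisms x_i and x_j (standard
    continuous-space formulation): both move toward ref, scaled by the
    mutual vector and random benefit factors BF in {1, 2}."""
    mutual = [(a + b) / 2 for a, b in zip(x_i, x_j)]
    bf1, bf2 = rng.choice([1, 2]), rng.choice([1, 2])
    new_i = [a + rng.random() * (r - m * bf1) for a, r, m in zip(x_i, ref, mutual)]
    new_j = [b + rng.random() * (r - m * bf2) for b, r, m in zip(x_j, ref, mutual)]
    return new_i, new_j

rng = random.Random(0)
pop = [[1.0, 2.0], [3.0, 4.0], [0.0, 1.0]]
# eDSOS-style: a random population member, not the best, serves as ref.
ni, nj = mutualism_step(pop[0], pop[1], ref=rng.choice(pop), rng=rng)
print(len(ni), len(nj))  # 2 2
```

Always pulling toward the current best is exactly what traps DSOS in local optima; substituting a random candidate keeps the population spread out over a large makespan/response-time landscape.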
  • Algorithms, Vol. 14, Pages 201: COVID-19 Prediction Applying Supervised
           Machine Learning Algorithms with Comparative Analysis Using WEKA

    • Authors: Charlyn Nayve Villavicencio, Julio Jerison Escudero Macrohon, Xavier Alphonse Inbaraj, Jyh-Horng Jeng, Jer-Guang Hsieh
      First page: 201
      Abstract: Early diagnosis is crucial to prevent the development of a disease that may endanger human lives. COVID-19, a contagious disease that has mutated into several variants, has become a global pandemic that demands to be diagnosed as soon as possible. With the use of technology, the available information concerning COVID-19 increases each day, and extracting useful information from massive data can be done through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model to analyze and predict the presence of COVID-19 using the COVID-19 Symptoms and Presence dataset from Kaggle. The J48 Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbors, and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model’s performance was evaluated using 10-fold cross-validation and compared according to major accuracy measures: correctly or incorrectly classified instances, kappa, mean absolute error, and time taken to build the model. The results show that the Support Vector Machine using the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012.
      Citation: Algorithms
      PubDate: 2021-06-30
      DOI: 10.3390/a14070201
      Issue No: Vol. 14, No. 7 (2021)
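The 10-fold cross-validation used for evaluation partitions the data as sketched below (plain contiguous folds for clarity; WEKA additionally shuffles and stratifies by class before folding):

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k near-equal folds. Each fold serves
    once as the test set while the other k-1 folds train the model."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(23, k=10)
print([len(f) for f in folds])  # [3, 3, 3, 2, 2, 2, 2, 2, 2, 2]
```

Averaging the accuracy over the ten held-out folds is what makes the reported 98.81% an estimate of generalization rather than training fit.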
  • Algorithms, Vol. 14, Pages 202: A Simplification Method for Point Cloud of
           T-Profile Steel Plate for Shipbuilding

    • Authors: Yanfeng Gao, Cicao Ping, Ling Wang, Binrui Wang
      First page: 202
      Abstract: Motivated by the requirements of point cloud simplification for T-profile steel plate welding in shipbuilding, the disadvantages of the existing simplification algorithms are analyzed, and a point cloud simplification method based on octree coding and a surface curvature feature threshold is proposed. In this method, the original point cloud data are divided into multiple sub-cubes with specified side lengths by octree coding, and the point closest to the gravity center of each sub-cube is kept. The k-neighborhood method and a curvature calculation are performed to obtain the curvature features of the point cloud, and the point cloud data are divided into several regions based on a given adjustable curvature threshold. Finally, combining the random sampling method with a simplification method based on the regional gravity center, the T-profile point cloud data can be simplified. In this study, after obtaining the point cloud data of a T-profile plate, the proposed simplification method is compared with three existing simplification methods. It is found that the proposed method is faster, while retaining more feature points and achieving approximately the same reduction rates.
      Citation: Algorithms
      PubDate: 2021-06-30
      DOI: 10.3390/a14070202
      Issue No: Vol. 14, No. 7 (2021)
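The first stage of the method (bucket the cloud into cubes, keep the point nearest each cube's gravity center) can be sketched as a flat voxel grid; the octree coding and the curvature-based refinement stages are omitted here:

```python
import math
from collections import defaultdict

def simplify(points, cell):
    """Bucket 3D points into cubes of side `cell`; per cube, keep the point
    nearest the cube's gravity center (centroid of its points)."""
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(math.floor(c / cell) for c in p)].append(p)
    kept = []
    for pts in buckets.values():
        center = tuple(sum(cs) / len(pts) for cs in zip(*pts))
        kept.append(min(pts, key=lambda p: math.dist(p, center)))
    return kept

pts = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.0), (0.9, 0.9, 0.0), (5.2, 5.2, 5.2)]
print(len(simplify(pts, cell=1.0)))  # 2 (one survivor per occupied cube)
```

An octree achieves the same bucketing adaptively, refining only occupied cubes, which is what makes it practical on dense scans of a full steel plate.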
  • Algorithms, Vol. 14, Pages 203: Non-Traditional Layout Design for Robotic
           Mobile Fulfillment System with Multiple Workstations

    • Authors: Xiuqing Yang, Xinglu Liu, Lijuan Feng, Jianquan Zhang, Mingyao Qi
      First page: 203
      Abstract: This paper studies the layout design of a robotic mobile fulfillment system with multiple workstations. This is a parts-to-picker storage system where robots hoist pods and bring them directly to the workstations for stationary pickers to retrieve required items. As few research efforts have focused on determining the optimal locations of workstations in such systems, we develop an integer programming model to determine the location of workstations to minimize the total traveling distance of robots. In addition, we investigate the near-optimal workstation location patterns (i.e., some general workstation configuration rules) in the context of both traditional and flying-V layouts. A series of experiments led to the following findings: (1) the flying-V layout can save 8∼26% of travel distance compared with the traditional layout, and the sacrifice of space use is only 2∼3% for medium or large warehouses; (2) instead of solving the optimization model, the proposed 2n rule and n+1 rule are simple and easily implemented ways to locate workstations, with travel distance gaps of less than 1.5% and 5% for traditional and flying-V layouts, respectively; and (3) the “optimal” cross-aisle angle (i.e., θ) in flying-V layout can be set as large as possible as long as the cross-aisle intersects the left or right edge of the warehouse.
      Citation: Algorithms
      PubDate: 2021-06-30
      DOI: 10.3390/a14070203
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 204: A Comparative Study of Block Incomplete
           Sparse Approximate Inverses Preconditioning on Tesla K20 and V100 GPUs

    • Authors: Wenpeng Ma, Wu Yuan, Xiazhen Liu
      First page: 204
      Abstract: Incomplete Sparse Approximate Inverses (ISAI) have shown some advantages over sparse triangular solves on GPUs when used in incomplete LU based preconditioners. In this paper, we extend the single-GPU method for Block–ISAI to a multiple-GPU algorithm by coupling it with a Block–Jacobi preconditioner, and we describe a detailed implementation in the open source numerical package PETSc. In the experiments, two representative cases are studied, and a comparative study of Block–ISAI on up to four GPUs is conducted on two major generations of NVIDIA’s GPUs (Tesla K20 and Tesla V100). Block–Jacobi preconditioning with Block–ISAI (BJPB-ISAI) shows an advantage over the level-scheduling based triangular solves from the cuSPARSE library for these cases, and the overhead of setting up Block–ISAI and the total wall clock time of GMRES are greatly reduced on Tesla V100 GPUs compared to Tesla K20 GPUs.
      Citation: Algorithms
      PubDate: 2021-06-30
      DOI: 10.3390/a14070204
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 205: Iterative Solution of Linear Matrix
           Inequalities for the Combined Control and Observer Design of Systems with
           Polytopic Parameter Uncertainty and Stochastic Noise

    • Authors: Andreas Rauh, Robert Dehnert, Swantje Romig, Sabine Lerch, Bernd Tibken
      First page: 205
      Abstract: Most research activities that utilize linear matrix inequality (LMI) techniques are based on the assumption that the separation principle of control and observer synthesis holds. This principle states that the combination of separately designed linear state feedback controllers and linear state observers, which are independently proven to be stable, results in overall stable system dynamics. However, even for linear systems, this property does not necessarily hold if polytopic parameter uncertainty and stochastic noise influence the system’s state and output equations. In this case, the control and observer design needs to be performed simultaneously to guarantee stabilization. However, the loss of the validity of the separation principle leads to nonlinear matrix inequalities instead of LMIs. For those nonlinear inequalities, the current paper proposes an iterative LMI solution procedure. If this algorithm produces a feasible solution, the resulting controller and observer gains ensure robust stability of the closed-loop control system for all possible parameter values. In addition, the proposed optimization criterion leads to a minimization of the sensitivity to stochastic noise so that the actual state trajectories converge as closely as possible to the desired operating point. The efficiency of the proposed solution approach is demonstrated by stabilizing the Zeeman catastrophe machine along the unstable branch of its bifurcation diagram. Additionally, an observer-based tracking control task is embedded into an iterative learning-type control framework.
      Citation: Algorithms
      PubDate: 2021-07-07
      DOI: 10.3390/a14070205
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 206: Energy Management of a Multi-Source Power

    • Authors: Omar Salah, Abdulrahim Shamayleh, Shayok Mukhopadhyay
      First page: 206
      Abstract: This work focuses on energy management for a system operated by multiple energy sources, which include batteries, supercapacitors, a hydrogen fuel cell, and a photovoltaic cell. The overall objective is to minimize the power consumption from all sources needed to satisfy the system’s power demand by optimizing the switching between the different energy sources. A dynamic mathematical model representing the energy sources is developed, taking into account the different constraints on the system, primarily the state of charge of the battery and of the supercapacitors. In addition to the model, a heuristic approach is developed and compared with the mathematical model. Both approaches were tested on a multi-energy-source ground robot as a prototype. The novelty of this work is that the scheduling of an energy system consisting of four different types of sources is analyzed via both dynamic programming and a heuristic approach. The results generated using both methods are analyzed and compared to a standard mode of operation. The comparison validated that the proposed approaches minimize the average power consumption across all sources. The dynamic programming approach performs well in terms of optimization and provides a superior switching sequence, while the heuristic approach offers definite advantages in terms of ease of implementation and simple computation requirements. Additionally, the switching sequence provided by the dynamic approach was able to meet the power demand in all simulations performed.
      Citation: Algorithms
      PubDate: 2021-07-07
      DOI: 10.3390/a14070206
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 207: Extended High Order Algorithms for
           Equations under the Same Set of Conditions

    • Authors: Ioannis K. Argyros, Debasis Sharma, Christopher I. Argyros, Sanjaya Kumar Parhi, Shanta Kumari Sunanda, Michael I. Argyros
      First page: 207
      Abstract: A variety of strategies are used to construct algorithms for solving equations. However, higher order derivatives are usually assumed in order to establish the convergence order. More importantly, bounds on the error and uniqueness regions for the solution are also not derived, so the benefits of these algorithms are limited. We use only the first derivative to tackle all these issues and study the ball analysis for two sixth-order algorithms under the same set of conditions. In addition, we present a computable ball comparison between these algorithms, enhancing their utility. Our idea is very general and can therefore be used to extend other algorithms in the same way.
      Citation: Algorithms
      PubDate: 2021-07-12
      DOI: 10.3390/a14070207
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 208: PM2.5 Concentration Prediction Based on
           CNN-BiLSTM and Attention Mechanism

    • Authors: Jinsong Zhang, Yongtao Peng, Bo Ren, Taoying Li
      First page: 208
      Abstract: The concentration of PM2.5 is an important index for measuring the degree of air pollution. When it exceeds the standard value, it is considered to cause pollution and lower the air quality, which is harmful to human health and can cause a variety of diseases, e.g., asthma, chronic bronchitis, etc. Therefore, predicting the PM2.5 concentration helps reduce its harm. In this paper, a hybrid model called CNN-BiLSTM-Attention is proposed to predict the PM2.5 concentration over the next two days. First, we select hourly PM2.5 concentration data from January 2013 to February 2017 for Shunyi District, Beijing; the auxiliary data include air quality data and meteorological data. We use the sliding window method for preprocessing and divide the corresponding data into a training set, a validation set, and a test set. Second, CNN-BiLSTM-Attention is composed of a convolutional neural network, a bidirectional long short-term memory neural network, and an attention mechanism. The parameters of this network structure are determined by the minimum error in the training process, including the size of the convolution kernel, the activation function, the batch size, the dropout rate, the learning rate, etc. We determine the feature sizes of the input and output by evaluating the performance of the model, finding the best output for the next 48 h. Third, in the experimental part, we use the test set to check the performance of the proposed CNN-BiLSTM-Attention on PM2.5 prediction, comparing it with other models, i.e., lasso regression, ridge regression, XGBOOST, SVR, CNN-LSTM, and CNN-BiLSTM. We conduct short-term prediction (48 h) and long-term prediction (72 h, 96 h, 120 h, 144 h), respectively.
The results demonstrate that even the predictions of the next 144 h with CNN-BiLSTM-Attention is better than the predictions of the next 48 h with the comparison models in terms of mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2).
      Citation: Algorithms
      PubDate: 2021-07-13
      DOI: 10.3390/a14070208
      Issue No: Vol. 14, No. 7 (2021)
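The sliding-window preprocessing described in the abstract above can be sketched as follows. This is a minimal illustration, assuming a univariate hourly series, a 24 h input window, a 48 h target window, and a 70/15/15 chronological split; none of these values are taken from the paper.

```python
# Sliding-window preprocessing: turn an hourly series into (input, target)
# pairs, then split chronologically into train/validation/test sets.
# Window lengths and split ratios are illustrative assumptions.

def make_windows(series, n_in=24, n_out=48):
    """Each sample: n_in past hours as input, next n_out hours as target."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return X, y

def chrono_split(X, y, train_frac=0.7, val_frac=0.15):
    """Chronological split so the test set is strictly in the future."""
    n = len(X)
    a = int(n * train_frac)
    b = int(n * (train_frac + val_frac))
    return (X[:a], y[:a]), (X[a:b], y[a:b]), (X[b:], y[b:])

series = list(range(200))            # stand-in for hourly PM2.5 readings
X, y = make_windows(series)
train, val, test = chrono_split(X, y)
```

A chronological (rather than shuffled) split is the usual choice for forecasting, since shuffling would leak future values into training.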
  • Algorithms, Vol. 14, Pages 209: Containment Control of First-Order
           Multi-Agent Systems under PI Coordination Protocol

    • Authors: Mingyang Huang, Chenglin Liu, Liang Shan
      First page: 209
      Abstract: This paper investigates the containment control problem of a discrete-time first-order multi-agent system composed of multiple leaders and followers, and we propose a proportional-integral (PI) coordination control protocol. Assuming that each follower has a directed path to one leader, we consider several cases according to the different topologies formed by the followers. Under a general directed topology that contains a spanning tree, a frequency-domain analysis is used to obtain a sufficient convergence condition for the followers to achieve containment-rendezvous, i.e., all the followers reach an agreement value in the convex hull formed by the leaders. In particular, a less conservative sufficient condition is obtained for followers under a symmetric and connected topology. Furthermore, it is proved that the proposed protocol drives followers with an unconnected topology to converge to the convex hull of the leaders. Numerical examples confirm the correctness of the theoretical results.
      Citation: Algorithms
      PubDate: 2021-07-14
      DOI: 10.3390/a14070209
      Issue No: Vol. 14, No. 7 (2021)
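A minimal numerical sketch of a discrete-time PI coordination protocol of the kind studied above: two stationary leaders, two followers, synchronous updates. The gains, topology, and step count are illustrative assumptions, and the paper's frequency-domain convergence conditions are not reproduced here.

```python
# Discrete-time first-order followers under a PI protocol: each follower
# applies a proportional term on the neighborhood disagreement plus an
# accumulated (integral) term. With stationary leaders at 0 and 1, the
# followers should settle inside the leaders' convex hull [0, 1].

def pi_containment(leaders, followers, neighbors, kp=0.3, ki=0.05, steps=400):
    """neighbors[i] lists indices (into leaders+followers) observed by
    follower i. Gains kp/ki are illustrative, chosen for stability here."""
    x = leaders + followers            # combined state, leaders first
    nl = len(leaders)
    integ = [0.0] * len(followers)     # integral term per follower
    for _ in range(steps):
        new = list(x)                  # synchronous update
        for i, nbrs in enumerate(neighbors):
            err = sum(x[j] - x[nl + i] for j in nbrs)
            integ[i] += err
            new[nl + i] = x[nl + i] + kp * err + ki * integ[i]
        x = new
    return x[nl:]                      # final follower states

# Two leaders at 0 and 1; each follower sees both leaders and the other
# follower. Followers start well outside the hull.
final = pi_containment([0.0, 1.0], [5.0, -3.0],
                       neighbors=[[0, 1, 3], [0, 1, 2]])
```

With this symmetric topology both followers converge to the same point inside the hull, illustrating the containment-rendezvous behavior described in the abstract.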
  • Algorithms, Vol. 14, Pages 210: A Multicriteria Simheuristic Approach for
           Solving a Stochastic Permutation Flow Shop Scheduling Problem

    • Authors: Eliana Maria Gonzalez-Neira, Jairo R. Montoya-Torres, Jose-Fernando Jimenez
      First page: 210
      Abstract: This paper proposes a hybridized simheuristic approach that couples a greedy randomized adaptive search procedure (GRASP), a Monte Carlo simulation, a Pareto archived evolution strategy (PAES), and an analytic hierarchy process (AHP) in order to solve a multicriteria stochastic permutation flow shop problem with stochastic processing times and stochastic sequence-dependent setup times. For the decisional criteria, the proposed approach considers four objective functions: two quantitative and two qualitative. While the expected value and the standard deviation of the earliness/tardiness of jobs are included as quantitative criteria to obtain a robust solution in a just-in-time environment, the approach also includes a qualitative assessment of product and customer importance in order to appraise a weighted priority for each job. An experimental design was carried out on several study instances of the flow shop problem to test the effects of the processing times and sequence-dependent setup times, obtained through lognormal and uniform probability distributions with coefficients of variation set to 0.3, 0.4, and 0.5. The results show that both the probability distributions and the coefficients of variation have a significant effect on the four decision criteria. In addition, the analytic hierarchy process makes it possible to choose the sequence on the Pareto frontier that best matches the decision-makers' objectives.
      Citation: Algorithms
      PubDate: 2021-07-14
      DOI: 10.3390/a14070210
      Issue No: Vol. 14, No. 7 (2021)
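The Monte Carlo component of the simheuristic above evaluates a candidate permutation under stochastic times. A sketch of that evaluation step with a hypothetical instance: lognormal processing times are parameterized by mean and coefficient of variation as in the experiments, but the jobs, due dates, and omission of setup times are simplifying assumptions.

```python
# Monte Carlo evaluation of one job sequence in a stochastic permutation
# flow shop: sample lognormal processing times, roll the completion-time
# recursion across machines, and report the mean and standard deviation
# of total earliness/tardiness (the two quantitative criteria above).
import math
import random
import statistics

def simulate_sequence(seq, mean_times, due, cv=0.3, runs=500, seed=7):
    """mean_times[j][m]: mean processing time of job j on machine m."""
    rng = random.Random(seed)
    s2 = math.log(1.0 + cv * cv)          # log-space variance for this CV
    sig = math.sqrt(s2)
    totals = []
    for _ in range(runs):
        prev = [0.0] * len(mean_times[0]) # prior job's completion per machine
        total = 0.0
        for j in seq:
            c = 0.0
            for m, mean in enumerate(mean_times[j]):
                p = rng.lognormvariate(math.log(mean) - s2 / 2.0, sig)
                c = max(prev[m], c) + p   # wait for machine and for own flow
                prev[m] = c
            total += abs(c - due[j])      # earliness + tardiness of job j
        totals.append(total)
    return statistics.mean(totals), statistics.stdev(totals)

mean_times = [[3.0, 2.0], [2.0, 4.0], [5.0, 1.0]]   # 3 jobs x 2 machines
due = [6.0, 11.0, 13.0]
mu, sd = simulate_sequence([0, 1, 2], mean_times, due)
```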
  • Algorithms, Vol. 14, Pages 211: Efficient Dynamic Cost Scheduling
           Algorithm for Financial Data Supply Chain

    • Authors: Alia Al Sadawi, Abdulrahim Shamayleh, Malick Ndiaye
      First page: 211
      Abstract: The financial data supply chain is vital to the economy, especially for banks, since it affects their customer service level; managing the scheduling of the financial data supply chain is therefore crucial to the efficiency of the banking sector. The primary tool in the data supply chain is data batch processing, which requires efficient scheduling. This work investigates the problem of scheduling the processing of tasks with non-identical sizes and different priorities on a set of parallel processors. An iterative dynamic scheduling algorithm (DCSDBP) was developed to address the data batching process. The objective is to minimize several cost types while satisfying constraints such as resource availability, customer service level, and task dependency relations. Tested on an illustrative network, the algorithm proved effective by allocating tasks with higher priority and weight while taking into account customers' Service Level Agreements, time, and the different cost types, leading to a lower total cost of the batching process. A sensitivity analysis, varying the model parameters for networks of different sizes and complexities, studies their impact on the total cost.
      Citation: Algorithms
      PubDate: 2021-07-14
      DOI: 10.3390/a14070211
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 212: Deep Learning Based Cardiac MRI
           Segmentation: Do We Need Experts?

    • Authors: Youssef Skandarani, Pierre-Marc Jodoin, Alain Lalande
      First page: 212
      Abstract: Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fraction and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth data is, for all practical purposes, as good as that of one trained on expert ground truth data, particularly when the non-expert receives a decent level of training, highlighting an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
      Citation: Algorithms
      PubDate: 2021-07-14
      DOI: 10.3390/a14070212
      Issue No: Vol. 14, No. 7 (2021)
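The Dice index used in the evaluation above is straightforward to compute; a minimal sketch on flat binary masks (the paper's masks are 2D cardiac segmentations, but the formula is the same once flattened):

```python
# Dice index between two binary segmentation masks:
# 2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks by convention.

def dice(a, b):
    inter = sum(x and y for x, y in zip(a, b))   # overlap of foreground pixels
    size = sum(a) + sum(b)
    return 1.0 if size == 0 else 2.0 * inter / size

pred = [1, 1, 0, 0, 1, 0]     # predicted mask (flattened)
truth = [1, 0, 0, 0, 1, 1]    # ground-truth mask (flattened)
score = dice(pred, truth)     # 2*2 / (3+3)
```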
  • Algorithms, Vol. 14, Pages 213: An Integrated Deep Learning and Belief
           Rule-Based Expert System for Visual Sentiment Analysis under Uncertainty

    • Authors: Sharif Noor Zisad, Etu Chowdhury, Mohammad Shahadat Hossain, Raihan Ul Islam, Karl Andersson
      First page: 213
      Abstract: Visual sentiment analysis has become more popular than textual analysis in various domains for decision-making purposes. On account of this, we develop a visual sentiment analysis system that can classify images by expression. The system considers six expressions: anger, joy, love, surprise, fear, and sadness. In our study, we propose an expert system that integrates a Deep Learning method with a Belief Rule Base (known as the BRB-DL approach) to assess an image's overall sentiment under uncertainty. This BRB-DL approach combines data-driven and knowledge-driven techniques to determine the overall sentiment. Our integrated expert system outperforms the state-of-the-art methods of visual sentiment analysis with promising results, classifying images with 86% accuracy. The system can be beneficial for understanding the emotional tendency and psychological state of an individual.
      Citation: Algorithms
      PubDate: 2021-07-15
      DOI: 10.3390/a14070213
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 214: Hybrid Artificial Intelligence HFS-RF-PSO
           Model for Construction Labor Productivity Prediction and Optimization

    • Authors: Sara Ebrahimi, Aminah Robinson Fayek, Vuppuluri Sumati
      First page: 214
      Abstract: This paper presents a novel approach, using hybrid feature selection (HFS), machine learning (ML), and particle swarm optimization (PSO) to predict and optimize construction labor productivity (CLP). HFS selects factors that are most predictive of CLP to reduce the complexity of CLP data. Selected factors are used as inputs for four ML models for CLP prediction. The study results showed that random forest (RF) obtains better performance in mapping the relationship between CLP and selected factors affecting CLP, compared with the other three models. Finally, the integration of RF and PSO is developed to identify the maximum CLP value and the optimum value of each selected factor. This paper introduces a new hybrid model named HFS-RF-PSO that addresses the main limitation of existing CLP prediction studies, which is the lack of capacity to optimize CLP and its most predictive factors with respect to a construction company’s preferences, such as a targeted CLP. The major contribution of this paper is the development of the hybrid HFS-RF-PSO model as a novel approach for optimizing factors that influence CLP and identifying the maximum CLP value.
      Citation: Algorithms
      PubDate: 2021-07-15
      DOI: 10.3390/a14070214
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 215: Design of an FPGA Hardware Optimizing the
           Performance and Power Consumption of a Plenoptic Camera Depth Estimation

    • Authors: Faraz Bhatti, Thomas Greiner
      First page: 215
      Abstract: A plenoptic camera captures the light field of a scene, which can be exploited to estimate 3D depth. This process generally consists of a significant number of recurrent operations and thus requires high computation power. A general-purpose processor, due to its sequential architecture, consequently suffers from long execution times. A desktop graphics processing unit (GPU) can be employed to resolve this problem; however, it is an expensive solution with respect to power consumption and therefore cannot be used in mobile applications with low energy requirements. In this paper, we propose a modified plenoptic depth estimation algorithm that works on a single frame recorded by the camera, together with a corresponding FPGA-based hardware design. For this purpose, the algorithm is restructured for parallelization and pipelining. In combination with efficient memory access, the results show good performance and lower power consumption compared to other systems.
      Citation: Algorithms
      PubDate: 2021-07-15
      DOI: 10.3390/a14070215
      Issue No: Vol. 14, No. 7 (2021)
  • Algorithms, Vol. 14, Pages 216: ArCAR: A Novel Deep Learning
           Computer-Aided Recognition for Character-Level Arabic Text Representation
           and Recognition

    • Authors: Abdullah Y. Muaad, Hanumanthappa Jayappa, Mugahed A. Al-antari, Sungyoung Lee
      First page: 216
      Abstract: Arabic text classification is the process of categorizing Arabic content into the proper category. In this paper, a novel deep learning Arabic text computer-aided recognition system (ArCAR) is proposed to represent and recognize Arabic text at the character level. The input Arabic text is quantized into a 1D vector per character, forming a 2D array input to the ArCAR system. The ArCAR system is validated with 5-fold cross-validation for two applications: Arabic text document classification and Arabic sentiment analysis. For document classification, the ArCAR system achieves its best performance on the Alarabiya-balance dataset, with an overall accuracy, recall, precision, and F1-score of 97.76%, 94.08%, 94.16%, and 94.09%, respectively. Meanwhile, ArCAR performs well for Arabic sentiment analysis, achieving its best performance on the Hotel Arabic Reviews Dataset (HARD) balance dataset, with an overall accuracy and F1-score of 93.58% and 93.23%, respectively. The proposed ArCAR provides a practical solution for accurate Arabic text representation, understanding, and classification.
      Citation: Algorithms
      PubDate: 2021-07-16
      DOI: 10.3390/a14070216
      Issue No: Vol. 14, No. 7 (2021)
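The character-level quantization described above (one 1D vector per character, stacked into a 2D array) can be sketched generically. The Latin alphabet and the one-hot scheme here are assumptions for illustration; the paper operates on Arabic characters and its exact quantization details may differ.

```python
# Character-level quantization: each character maps to a one-hot column
# over a fixed alphabet, so a text becomes a 2D 0/1 array of shape
# (alphabet_size x max_len). Unknown characters stay all-zero, and texts
# longer than max_len are truncated.

def quantize(text, alphabet, max_len=16):
    index = {c: i for i, c in enumerate(alphabet)}
    grid = [[0] * max_len for _ in range(len(alphabet))]
    for pos, ch in enumerate(text[:max_len]):
        if ch in index:
            grid[index[ch]][pos] = 1
    return grid

grid = quantize("abca", "abc", max_len=4)
```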
  • Algorithms, Vol. 14, Pages 217: Development of Multi-Actor Multi-Criteria
           Analysis Based on the Weight of Stakeholder Involvement in the Assessment
           of Natural–Cultural Tourism Area Transportation Policies

    • Authors: Heru Purboyo Hidayat Putro, Pradono Pradono, Titus Hari Setiawan
      First page: 217
      Abstract: Multi-actor multi-criteria analysis (MAMCA) was developed with a process involving the participation of various stakeholders. Stakeholders express various criteria as measures for the achievement of their respective goals. In general, the assessment of each stakeholder is considered to have the same weight. In reality, the weight of each stakeholder’s involvement in policy decision making is not the same. For example, the government’s assessment weight will be different from those of local business actors. In this study, the authors developed a multi-actor multi-criteria analysis method by adding the weight of stakeholder involvement when making decisions about transportation policies that support sustainable mobility in protected natural–cultural tourism areas. The weight of involvement was developed through stakeholder participation. Stakeholders were asked to provide weights for all stakeholders other than themselves using the AHP method. The results of this weighting were then averaged and considered as the stakeholder assessment weights. Adding stakeholder weighting can also improve the quality of decisions by avoiding bias and following the principle of fairness in the assessment.
      Citation: Algorithms
      PubDate: 2021-07-20
      DOI: 10.3390/a14070217
      Issue No: Vol. 14, No. 7 (2021)
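The stakeholder-weighting step described above can be sketched numerically: each stakeholder rates all stakeholders other than themselves, and the ratings are averaged and normalized. The ratings below are hypothetical, and the full AHP pairwise-comparison procedure is replaced by pre-computed scores for brevity.

```python
# Average-of-others stakeholder weighting: stakeholder j's involvement
# weight is the mean of the weights all other stakeholders assigned to j,
# normalized to sum to 1.

def involvement_weights(ratings):
    """ratings[i][j]: weight stakeholder i assigns to stakeholder j
    (the diagonal ratings[i][i] is ignored)."""
    n = len(ratings)
    avg = [sum(ratings[i][j] for i in range(n) if i != j) / (n - 1)
           for j in range(n)]
    total = sum(avg)
    return [w / total for w in avg]

# Illustrative actors: government, business, residents.
ratings = [[0.0, 0.6, 0.4],
           [0.7, 0.0, 0.3],
           [0.5, 0.5, 0.0]]
weights = involvement_weights(ratings)
```

Excluding self-ratings is what implements the fairness principle mentioned in the abstract: no stakeholder can inflate their own weight.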
  • Algorithms, Vol. 14, Pages 275: A Review of Parallel Heterogeneous
           Computing Algorithms in Power Systems

    • Authors: Diego Rodriguez, Diego Gomez, David Alvarez, Sergio Rivera
      First page: 275
      Abstract: The power system expansion and the integration of technologies, such as renewable generation, distributed generation, high voltage direct current, and energy storage, have made power system simulation challenging in multiple applications. The current computing platforms employed for planning, operation, studies, visualization, and the analysis of power systems are reaching their operational limit since the complexity and size of modern power systems results in long simulation times and high computational demand. Time reductions in simulation and analysis lead to the better and further optimized performance of power systems. Heterogeneous computing—where different processing units interact—has shown that power system applications can take advantage of the unique strengths of each type of processing unit, such as central processing units, graphics processing units, and field-programmable gate arrays interacting in on-premise or cloud environments. Parallel Heterogeneous Computing appears as an alternative to reduce simulation times by optimizing multitask execution in parallel computing architectures with different processing units working together. This paper presents a review of Parallel Heterogeneous Computing techniques, how these techniques have been applied in a wide variety of power system applications, how they help reduce the computational time of modern power system simulation and analysis, and the current tendency regarding each application. We present a wide variety of approaches classified by technique and application.
      Citation: Algorithms
      PubDate: 2021-09-23
      DOI: 10.3390/a14100275
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 276: Algorithms for Optimal Power Flow Extended
           to Controllable Renewable Systems and Loads

    • Authors: Elkin D. Reyes, Sergio Rivera
      First page: 276
      Abstract: In an effort to quantify and manage uncertainties inside power systems with penetration of renewable energy, uncertainty costs have been defined and different uncertainty cost functions have been calculated for different types of generators and electric vehicles. This article uses the uncertainty cost formulation to propose algorithms that solve the problem of optimal power flow extended to controllable renewable systems and controllable loads. In a previous study, the first and second derivatives of the uncertainty cost functions were calculated; here, analytical and heuristic optimal power flow algorithms are used. To corroborate the analytical solution, the optimal power flow was also solved by means of metaheuristic algorithms. Finally, it was found that the analytical algorithms have much higher performance than the metaheuristic methods, especially as the number of decision variables in the optimization problem grows.
      Citation: Algorithms
      PubDate: 2021-09-25
      DOI: 10.3390/a14100276
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 277: Rough Estimator Based Asynchronous
           Distributed Super Points Detection on High Speed Network Edge

    • Authors: Jie Xu, Wei Ding
      First page: 277
      Abstract: Super points detection plays an important role in network research and applications. With the increase in network scale, distributed super points detection has become a hot research topic. The key issue in detecting super points in a multi-node distributed environment is how to reduce communication overhead. Therefore, this paper proposes a three-stage communication algorithm to detect super points in a distributed environment: the Rough Estimator based Asynchronous Distributed super points detection algorithm (READ). READ uses a lightweight estimator, the Rough Estimator (RE), which is fast to compute and takes little memory, to generate candidate super points. Meanwhile, the well-known Linear Estimator (LE) is applied to accurately estimate the cardinality of each candidate super point, so as to detect super points correctly. In READ, each node scans IP address pairs asynchronously. On reaching the time window boundary, READ starts its three-stage communication to detect the super points. This paper proves that the accuracy of READ in a distributed environment is no less than that in a single-node environment. Four groups of 10 Gb/s and 40 Gb/s real-world high-speed network traffic are used to test READ. The experimental results show that READ not only has high accuracy in a distributed environment, but also incurs less than 5% of the communication burden of existing algorithms.
      Citation: Algorithms
      PubDate: 2021-09-25
      DOI: 10.3390/a14100277
      Issue No: Vol. 14, No. 10 (2021)
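The Linear Estimator (LE) mentioned above estimates a candidate super point's cardinality from an m-bit bitmap as -m·ln(z/m), where z is the number of bits still zero. A single-node sketch follows; the distributed three-stage communication and the Rough Estimator are not reproduced, and the hash choice and bitmap size are illustrative assumptions.

```python
# Linear (bitmap) cardinality estimator: hash each element into one of
# m bits, then estimate the number of distinct elements from the fraction
# of bits left at zero.
import hashlib
import math

class LinearEstimator:
    def __init__(self, m=1024):
        self.m = m
        self.bits = [0] * m

    def _slot(self, item):
        # md5 used here only as a deterministic, well-mixed hash
        digest = hashlib.md5(str(item).encode()).hexdigest()
        return int(digest, 16) % self.m

    def add(self, item):
        self.bits[self._slot(item)] = 1

    def estimate(self):
        z = self.bits.count(0)
        return float("inf") if z == 0 else -self.m * math.log(z / self.m)

# A host contacting 500 distinct peers: the estimate should be near 500.
le = LinearEstimator()
for peer in range(500):
    le.add(f"10.0.0.1->{peer}")
est = le.estimate()
```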
  • Algorithms, Vol. 14, Pages 278: Ensembling EfficientNets for the
           Classification and Interpretation of Histopathology Images

    • Authors: Athanasios Kallipolitis, Kyriakos Revelos, Ilias Maglogiannis
      First page: 278
      Abstract: The extended utilization of digitized Whole Slide Images is transforming the workflow of traditional clinical histopathology to the digital era. The ongoing transformation has demonstrated major potential for the exploitation of Machine Learning and Deep Learning techniques as assistive tools for specialized medical personnel. While the performance of the implemented algorithms is continually boosted by the mass production of Whole Slide Images and the development of state-of-the-art deep convolutional architectures, ensemble models provide an additional methodology for improving prediction accuracy. Despite the earlier belief that deep convolutional networks must be treated as black boxes, important steps towards the interpretation of such predictive models have been proposed recently. However, this trend has not yet fully reached ensemble models. The paper investigates the application of an explanation scheme for ensemble classifiers, while providing satisfactory classification results for histopathology breast and colon cancer images in terms of accuracy. The predictions can be interpreted via the hidden-layer activations of the included subnetworks and are more accurate than those of single-network implementations.
      Citation: Algorithms
      PubDate: 2021-09-26
      DOI: 10.3390/a14100278
      Issue No: Vol. 14, No. 10 (2021)
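One simple way several subnetworks can be combined, as in the ensemble above, is soft voting: average the members' class-probability vectors and take the arg max. The probability vectors below are made-up stand-ins for real EfficientNet outputs, and the paper's actual combination scheme may differ.

```python
# Soft-voting ensemble: average per-class probabilities across members,
# then predict the class with the highest averaged probability.

def soft_vote(member_probs):
    """member_probs[k][c]: probability of class c from ensemble member k."""
    n = len(member_probs)
    classes = len(member_probs[0])
    avg = [sum(p[c] for p in member_probs) / n for c in range(classes)]
    return max(range(classes), key=avg.__getitem__), avg

members = [[0.6, 0.3, 0.1],    # member 1 favors class 0
           [0.2, 0.5, 0.3],    # member 2 favors class 1
           [0.5, 0.4, 0.1]]    # member 3 favors class 0
label, avg = soft_vote(members)
```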
  • Algorithms, Vol. 14, Pages 279: Short Communication: Optimally Solving the
           Unit-Demand Envy-Free Pricing Problem with Metric Substitutability in
           Cubic Time

    • Authors: Marcos M. Salvatierra, Mario Salvatierra, Juan G. Colonna
      First page: 279
      Abstract: In general, the unit-demand envy-free pricing problem has proven to be APX-hard, but some special cases can be optimally solved in polynomial time. When substitution costs that form a metric space are included, the problem can be solved in O(n^4) time, and when the number of consumers is equal to the number of items (all with a single copy, so that each consumer buys one item), an O(n^3) time method is presented to solve it. This work shows that the first case has similarities with the second, and, by exploiting the structural properties of the cost set, it presents an O(n^2) time algorithm for solving it when a competitive equilibrium is considered, or an O(n^3) time algorithm for more general scenarios. The methods are based on a dynamic programming strategy, which simplifies the shortest-path calculations in a network; this simplification is usually adopted in the second case. The theoretical results obtained provide efficiency in the search for optimal solutions to specific revenue management problems.
      Citation: Algorithms
      PubDate: 2021-09-26
      DOI: 10.3390/a14100279
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 280: A Brief Roadmap into Uncertain Knowledge
           Representation via Probabilistic Description Logics

    • Authors: Rafael Peñaloza
      First page: 280
      Abstract: Logic-based knowledge representation is one of the main building blocks of (logic-based) artificial intelligence. While most successful knowledge representation languages are based on classical logic, realistic intelligent applications need to handle uncertainty in an adequate manner. Over the years, many different languages for representing uncertain knowledge—often extensions of classical knowledge representation languages—have been proposed. We briefly present some of the defining properties of these languages as they pertain to the family of probabilistic description logics. This limited view is intended to help pave the way for the interested researcher to find the most adequate language for their needs, and potentially identify the remaining gaps.
      Citation: Algorithms
      PubDate: 2021-09-28
      DOI: 10.3390/a14100280
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 281: Information Fusion-Based Deep Neural
           Attentive Matrix Factorization Recommendation

    • Authors: Zhen Tian, Lamei Pan, Pu Yin, Rui Wang
      First page: 281
      Abstract: The emergence of recommendation systems has effectively alleviated the information overload problem. However, traditional recommendation systems either ignore the rich attribute information of users and items, such as a user's socio-demographic features or an item's content features, and thus face the sparsity problem, or adopt a fully connected network to concatenate the attribute information, ignoring the interactions between attributes. In this paper, we propose the information fusion-based deep neural attentive matrix factorization (IFDNAMF) recommendation model, which introduces the attribute information and adopts the element-wise product between different information domains to learn cross-features when conducting information fusion. An attention mechanism is utilized to distinguish the importance of different cross-features for the prediction results. Moreover, IFDNAMF adopts a deep neural network to learn the high-order interactions between users and items. We conduct extensive experiments on two datasets, MovieLens and Book-Crossing, and demonstrate the feasibility and effectiveness of the model.
      Citation: Algorithms
      PubDate: 2021-09-28
      DOI: 10.3390/a14100281
      Issue No: Vol. 14, No. 10 (2021)
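The cross-feature construction described above (element-wise products between attribute domains, weighted by attention) can be sketched with toy vectors. The embeddings and attention scores below are fixed stand-ins for quantities the real model learns end to end.

```python
# Element-wise product cross-features with softmax attention: every pair
# of domain embeddings yields one cross vector, and an attention weight
# decides how much each cross contributes to the fused representation.
import math

def cross_features(domains):
    crosses = []
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            crosses.append([a * b for a, b in zip(domains[i], domains[j])])
    return crosses

def attend(crosses, scores):
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    w = [e / total for e in exp]                    # softmax weights
    dim = len(crosses[0])
    return [sum(w[k] * crosses[k][d] for k in range(len(crosses)))
            for d in range(dim)]

user_age   = [0.2, 0.1]        # toy 2-d embeddings, one per domain
user_job   = [0.5, 0.4]
item_genre = [0.3, 0.6]
crosses = cross_features([user_age, user_job, item_genre])
fused = attend(crosses, scores=[1.0, 0.0, 0.0])
```

The fused vector is a convex combination of the crosses, so an important cross (high score) dominates the representation, which is the role of the attention mechanism in the abstract.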
  • Algorithms, Vol. 14, Pages 282: Simultaneous Feature Selection and Support
           Vector Machine Optimization Using an Enhanced Chimp Optimization Algorithm

    • Authors: Wu, Zhang, Jia, Leng
      First page: 282
      Abstract: The Chimp Optimization Algorithm (ChOA), a novel meta-heuristic algorithm, was proposed in recent years. It divides the population into four different levels for the purpose of hunting. However, some defects still cause the algorithm to fall into local optima. To overcome these defects, an Enhanced Chimp Optimization Algorithm (EChOA) is developed in this paper. Highly Disruptive Polynomial Mutation (HDPM) is introduced to further explore the population space and increase population diversity. Then, Spearman's rank correlation coefficient between the chimp with the highest fitness and the one with the lowest fitness is calculated. To avoid local optima, chimps with low fitness values are equipped with the Beetle Antenna Search Algorithm (BAS) to obtain visual ability. Through these three strategies, the exploration and exploitation abilities of the population are enhanced. On this basis, this paper proposes an EChOA-SVM model that can optimize parameters while selecting features, so that the maximum classification accuracy can be achieved with as few features as possible. To verify its effectiveness, the proposed method is compared with seven common methods, including the original algorithm. Seventeen benchmark datasets from the UCI machine learning repository are used to evaluate the accuracy, number of features, and fitness of these methods. Experimental results show that the classification accuracy of the proposed method is better than that of the other methods on most datasets, and the number of features required by the proposed method is also smaller.
      Citation: Algorithms
      PubDate: 2021-09-28
      DOI: 10.3390/a14100282
      Issue No: Vol. 14, No. 10 (2021)
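Spearman's rank correlation coefficient, used above to compare the best and worst chimps, can be computed directly with the no-ties formula rho = 1 - 6·Σd_i² / (n(n² - 1)):

```python
# Spearman's rank correlation: rank both lists, then apply the no-ties
# closed form on the squared rank differences.

def ranks(values):
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly concordant fitness lists give rho = 1.
rho = spearman([1.0, 2.5, 0.3, 4.0], [2.0, 3.0, 1.0, 5.0])
```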
  • Algorithms, Vol. 14, Pages 283: XGB4mcPred: Identification of DNA
           N4-Methylcytosine Sites in Multiple Species Based on an eXtreme Gradient
           Boosting Algorithm and DNA Sequence Information

    • Authors: Xiao Wang, Xi Lin, Rong Wang, Kai-Qi Fan, Li-Jun Han, Zhao-Yuan Ding
      First page: 283
      Abstract: DNA N4-methylcytosine (4mC) plays an important role in numerous biological functions and is a mechanism of particular epigenetic importance. Accurate identification of 4mC sites in DNA sequences is therefore necessary to understand the underlying functional mechanism. Although some effective computational tools have been proposed to identify DNA 4mC sites, improving identification accuracy and generalization ability remains challenging, so there is a great need for a computational tool that accurately identifies the positions of DNA 4mC sites. Hence, this study proposes XGB4mcPred, a predictor for the identification of 4mC sites trained using the eXtreme Gradient Boosting algorithm (XGBoost) and DNA sequence information. First, we apply One-Hot encoding to adjacent and spaced nucleotides, dinucleotides, and trinucleotides of the original 4mC site sequences to obtain feature vectors. Then, the importance values of the features pre-trained by the XGBoost algorithm are used as a threshold to filter redundant features, resulting in a significant improvement in the identification accuracy of the constructed XGB4mcPred predictor. The analysis shows a clear preference in nucleotide sequences between 4mC sites and non-4mC sites across six datasets from multiple species, and the optimized features better distinguish 4mC sites from non-4mC sites. Cross-validation and independent tests on the six species show that XGB4mcPred significantly outperforms other state-of-the-art predictors. Additionally, a user-friendly web server for the XGB4mcPred predictor was developed and made freely accessible.
      Citation: Algorithms
      PubDate: 2021-09-29
      DOI: 10.3390/a14100283
      Issue No: Vol. 14, No. 10 (2021)
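The One-Hot feature construction described above can be sketched for the mononucleotide and adjacent-dinucleotide parts; spaced nucleotides, trinucleotides, and the XGBoost importance filtering are omitted, and the window below is a toy example rather than a real 4mC-site sequence.

```python
# One-Hot encoding of a DNA window: 4 indicator features per nucleotide
# position plus 16 indicator features per adjacent dinucleotide position,
# concatenated into one sparse 0/1 feature vector.
from itertools import product

BASES = "ACGT"
DINUCS = ["".join(p) for p in product(BASES, repeat=2)]   # AA, AC, ..., TT

def encode(seq):
    feats = []
    for ch in seq:                                  # mononucleotide one-hots
        feats.extend(1 if ch == b else 0 for b in BASES)
    for i in range(len(seq) - 1):                   # adjacent dinucleotides
        pair = seq[i:i + 2]
        feats.extend(1 if pair == d else 0 for d in DINUCS)
    return feats

vec = encode("ACGT")    # 4*4 mono features + 3*16 dinucleotide features
```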
  • Algorithms, Vol. 14, Pages 284: FPGA-Based Linear Detection Algorithm of
           an Underground Inspection Robot

    • Authors: Chuanwei Zhang, Shirui Chen, Lu Zhao, Xianghe Li, Xiaowen Ma
      First page: 284
      Abstract: Conveyor belts are key equipment for bulk material transport, and ensuring their safe operation is of great significance. As belt conveyors develop toward long distances, large volumes, high speeds, and high reliability, the use of inspection robots to perform full inspections has not only improved the efficiency and scope of inspections but has also eliminated the traditional method's dependence on dense sensor arrangements. In this paper, relying on a wireless-powered orbital inspection robot independently developed by our laboratory and aiming at the problem of conveyor belt deviation, methods for diagnosing belt deviation and FPGA (field-programmable gate array) parallel computing technology are studied. Based on the traditional LSD (line segment detection) algorithm, a straight-line extraction IP core suitable for an FPGA computing platform was constructed. This new hardware line detection algorithm improves the real-time performance and flexibility of the belt conveyor diagnosis mechanism.
      Citation: Algorithms
      PubDate: 2021-09-29
      DOI: 10.3390/a14100284
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 285: Efficient and Portable Distribution
           Modeling for Large-Scale Scientific Data Processing with Data-Parallel
           Primitives

    • Authors: Hao-Yi Yang, Zhi-Rong Lin, Ko-Chih Wang
      First page: 285
      Abstract: The use of distribution-based data representation to handle large-scale scientific datasets is a promising approach. Distribution-based approaches often transform a scientific dataset into many distributions, each of which is calculated from a small number of samples. Most of the proposed parallel algorithms focus on modeling single distributions from many input samples efficiently, but these may not fit the large-scale scientific data processing scenario because they cannot utilize computing resources effectively. Histograms and the Gaussian Mixture Model (GMM) are the most popular distribution representations used to model scientific datasets. Therefore, we propose the use of multi-set histogram and GMM modeling algorithms for the scenario of large-scale scientific data processing. Our algorithms are developed by data-parallel primitives to achieve portability across different hardware architectures. We evaluate the performance of the proposed algorithms in detail and demonstrate use cases for scientific data processing.
      Citation: Algorithms
      PubDate: 2021-09-29
      DOI: 10.3390/a14100285
      Issue No: Vol. 14, No. 10 (2021)
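The per-block modeling idea lends itself to a compact sketch: each block's histogram depends only on that block's samples, which is what makes the computation map onto data-parallel primitives. The following toy version (illustrative names and bin counts, not the authors' code) computes one histogram per block:

```python
# Hedged sketch of the distribution-based representation idea: partition a
# scalar field into small blocks and summarize each block by a histogram.
# Blocks are independent, so the per-block work parallelizes naturally.

def block_histogram(samples, n_bins, lo, hi):
    """Histogram of one block's samples over [lo, hi)."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s in samples:
        idx = min(int((s - lo) / width), n_bins - 1)  # clamp top edge
        counts[idx] += 1
    return counts

field = [0.1, 0.2, 0.9, 0.4, 0.5, 0.8]              # toy scalar field
blocks = [field[i:i + 3] for i in range(0, len(field), 3)]
histograms = [block_histogram(b, n_bins=2, lo=0.0, hi=1.0) for b in blocks]
```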
  • Algorithms, Vol. 14, Pages 286: Enhanced Hyper-Cube Framework Ant Colony
           Optimization for Combinatorial Optimization Problems

    • Authors: Ali Ahmid, Thien-My Dao, Ngan Van Le
      First page: 286
      Abstract: Solving combinatorial optimization problems is common practice in real-life engineering applications. Trusses, cranes, and composite laminated structures are good examples that fall under this category of optimization problems. These examples share the feature of a discrete design domain, which turns them into a set of NP-hard optimization problems. Determining the right optimization algorithm for such problems is a crucial decision that tends to impact the overall cost of the design process, and reinforcing the performance of a prospective optimization algorithm further reduces that cost. In the current study, a comprehensive assessment criterion was developed to assess the performance of meta-heuristic (MH) solutions in the domain of structural design. Thereafter, the proposed criterion was employed to compare five different variants of Ant Colony Optimization (ACO) on the well-known structural optimization problem of laminate Stacking Sequence Design (SSD). The initial results of the comparison study reveal that the Hyper-Cube Framework (HCF) ACO variant outperforms the others. Consequently, an investigation of further improvement led to an enhanced version of HCFACO (EHCFACO). Eventually, the performance assessment of the EHCFACO variant showed that the average practical reliability became more than twice that of the standard ACO, and the normalized price decreased from 51.17 to 28.92.
      Citation: Algorithms
      PubDate: 2021-09-29
      DOI: 10.3390/a14100286
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 287: Closed-Loop Cognitive-Driven Gain Control
           of Competing Sounds Using Auditory Attention Decoding

    • Authors: Ali Aroudi, Eghart Fischer, Maja Serman, Henning Puder, Simon Doclo
      First page: 287
      Abstract: Recent advances have shown that it is possible to identify the target speaker that a listener is attending to using single-trial EEG-based auditory attention decoding (AAD). Most AAD methods have been investigated in an open-loop scenario, where AAD is performed offline without presenting online feedback to the listener. In this work, we aim at developing a closed-loop AAD system that makes it possible to enhance a target speaker, suppress an interfering speaker, and switch attention between both speakers. To this end, we propose a cognitive-driven adaptive gain controller (AGC) based on real-time AAD. Using the EEG responses of the listener and the speech signals of both speakers, the real-time AAD generates probabilistic attention measures, based on which the attended and the unattended speaker are identified. The AGC then amplifies the identified attended speaker and attenuates the identified unattended speaker, and the resulting signals are presented to the listener via loudspeakers. We investigate the performance of the proposed system in terms of decoding performance and signal-to-interference ratio (SIR) improvement. The experimental results show that, although there is a significant delay in detecting attention switches, the proposed system is able to improve the SIR between the attended and the unattended speaker. In addition, no significant difference in decoding performance is observed between closed-loop AAD and open-loop AAD. The subjective evaluation results show that, compared to open-loop AAD, the proposed closed-loop cognitive-driven system demands a similar level of cognitive effort to follow the attended speaker, to ignore the unattended speaker, and to switch attention between both speakers. Closed-loop AAD in an online fashion is therefore feasible and enables the listener to interact with the AGC.
      Citation: Algorithms
      PubDate: 2021-09-30
      DOI: 10.3390/a14100287
      Issue No: Vol. 14, No. 10 (2021)
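A minimal sketch of such an adaptive gain controller, assuming the AAD stage delivers an attention probability for speaker A at each step (all constants and names are illustrative, not the authors' parameters):

```python
# Hedged sketch of a cognitive-driven adaptive gain controller: the decoder
# yields an attention probability p_a for speaker A; the AGC amplifies the
# more-probable attended speaker and attenuates the other, smoothing the
# gains over time to avoid abrupt jumps.

def update_gains(p_a, gain_a, gain_b, g_hi=1.0, g_lo=0.2, alpha=0.1):
    """One AGC step: move gains toward targets chosen by the AAD output."""
    target_a, target_b = (g_hi, g_lo) if p_a >= 0.5 else (g_lo, g_hi)
    gain_a += alpha * (target_a - gain_a)   # exponential smoothing
    gain_b += alpha * (target_b - gain_b)
    return gain_a, gain_b

gain_a, gain_b = 0.6, 0.6
for _ in range(50):                          # listener attends to speaker A
    gain_a, gain_b = update_gains(p_a=0.9, gain_a=gain_a, gain_b=gain_b)
# gains converge toward (1.0, 0.2), raising the SIR in favor of speaker A
```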
  • Algorithms, Vol. 14, Pages 288: Machine Learning-Based Prediction of the
           Seismic Bearing Capacity of a Shallow Strip Footing over a Void in
           Heterogeneous Soils

    • Authors: Mohammad Sadegh Es-haghi, Mohsen Abbaspour, Hamidreza Abbasianjahromi, Stefano Mariani
      First page: 288
      Abstract: The seismic bearing capacity of a shallow strip footing above a void displays a complex dependence on several characteristics, linked to geometric problems and to the soil properties. Hence, setting analytical models to estimate such bearing capacity is extremely challenging. In this work, machine learning (ML) techniques have been employed to predict the seismic bearing capacity of a shallow strip footing located over a single unsupported rectangular void in heterogeneous soil. A dataset consisting of 38,000 finite element limit analysis simulations has been created, and the mean value between the upper and lower bounds of the bearing capacity has been computed at varying undrained shear strength and internal friction angle of the soil, horizontal earthquake accelerations, and position, shape, and size of the void. Three machine learning techniques have been adopted to learn the link between the aforementioned parameters and the bearing capacity: multilayer perceptron neural networks; a group method of data handling; and a combined adaptive-network-based fuzzy inference system and particle swarm optimization. The performances of these ML techniques have been compared with each other in terms of the following statistical performance indices: coefficient of determination (R2); root mean square error (RMSE); mean absolute percentage error; scatter index; and standard bias. Results have shown that all the ML techniques perform well, though the multilayer perceptron has a slightly superior accuracy, featuring noteworthy results (R2 = 0.9955 and RMSE = 0.0158).
      Citation: Algorithms
      PubDate: 2021-09-30
      DOI: 10.3390/a14100288
      Issue No: Vol. 14, No. 10 (2021)
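The statistical indices used for the comparison are standard and easy to state concretely. A small sketch (toy values, not the paper's dataset) of the coefficient of determination, RMSE, and mean absolute percentage error:

```python
import math

# Hedged sketch of three of the performance indices used to compare the ML
# models: coefficient of determination (R2), root mean square error (RMSE),
# and mean absolute percentage error (MAPE). Data below are toy values.

def performance_indices(y_true, y_pred):
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mape = 100.0 / n * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))
    return r2, rmse, mape

r2, rmse, mape = performance_indices([100.0, 200.0, 300.0],
                                     [ 98.0, 205.0, 295.0])
```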
  • Algorithms, Vol. 14, Pages 289: Comparing Commit Messages and Source Code
           Metrics for the Prediction Refactoring Activities

    • Authors: Priyadarshni Suresh Sagar, Eman Abdulah AlOmar, Mohamed Wiem Mkaouer, Ali Ouni, Christian D. Newman
      First page: 289
      Abstract: Understanding how developers refactor their code is critical to support the design improvement process of software. This paper investigates to what extent code metrics are good indicators for predicting refactoring activity in the source code. To do so, we formulated the prediction of refactoring operation types as a multi-class classification problem. Our solution relies on measuring metrics extracted from committed code changes in order to extract the corresponding features (i.e., metric variations) that best represent each class (i.e., refactoring type), so as to automatically predict, for a given commit, the method-level type of refactoring being applied, namely Move Method, Rename Method, Extract Method, Inline Method, Pull-up Method, and Push-down Method. We compared various classifiers, in terms of their prediction performance, using a dataset of 5004 commits extracted from 800 Java projects. Our main findings show that the random forest model trained with code metrics achieved the best average accuracy of 75%. However, we detected a variation in the results per class, which means that some refactoring types are harder to detect than others.
      Citation: Algorithms
      PubDate: 2021-09-30
      DOI: 10.3390/a14100289
      Issue No: Vol. 14, No. 10 (2021)
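The feature construction the abstract describes, metric variations per commit, can be sketched simply (metric names and values are illustrative, not the paper's feature set):

```python
# Hedged sketch of the feature construction: for each commit, the variation
# of every code metric between the pre- and post-commit versions becomes one
# feature, and the refactoring type is the class label for the classifier.

def metric_variations(before, after):
    """Feature vector of per-metric deltas for one commit."""
    return {m: after[m] - before[m] for m in before}

before = {"loc": 120, "methods": 9, "complexity": 14}
after  = {"loc": 104, "methods": 10, "complexity": 11}
features = metric_variations(before, after)
# e.g. an Extract Method commit tends to add a method and lower complexity
```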
  • Algorithms, Vol. 14, Pages 290: Fine-Grained Pests Recognition Based on
           Truncated Probability Fusion Network via Internet of Things in Forestry
           and Agricultural Scenes

    • Authors: Kai Ma, Ming-Jun Nie, Sen Lin, Jianlei Kong, Cheng-Cai Yang, Jinhao Liu
      First page: 290
      Abstract: Accurate identification of insect pests is key to improving crop yield and ensuring quality and safety. However, under the influence of environmental conditions, the same kind of pest shows obvious intraclass differences in appearance, while different kinds of pests can look quite similar. Traditional methods have difficulty with the fine-grained identification of pests, and their practical deployment rate is low. To solve this problem, this paper uses a variety of terminal devices in the agricultural Internet of Things to obtain a large number of pest images and proposes a fine-grained pest identification model based on a truncated probability fusion network (FPNT). This model designs a fine-grained feature extractor based on an optimized CSPNet backbone network, mining local feature expressions at different levels that can distinguish subtle differences. After the NetVLAD aggregation layer is integrated, the gated probability fusion layer gives full play to the complementary information and confidence coupling of multi-model fusion. Comparison tests show that the FPNT model achieves an average recognition accuracy of 93.18% across all kinds of pests, outperforming other deep-learning methods, with the average processing time dropping to 61 ms. It can thus meet the needs of fine-grained pest image recognition in agricultural and forestry Internet of Things practice and provide a technical reference for the intelligent early warning and prevention of pests.
      Citation: Algorithms
      PubDate: 2021-09-30
      DOI: 10.3390/a14100290
      Issue No: Vol. 14, No. 10 (2021)
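A rough sketch of confidence-weighted probability fusion, the general idea behind combining branch outputs (this illustrates the concept only, not the paper's gated fusion layer):

```python
# Hedged sketch of confidence-weighted probability fusion: each branch model
# outputs a class-probability vector; a gate weights branches by their
# confidence (here, simply the max probability) before averaging.

def fuse(prob_vectors):
    weights = [max(p) for p in prob_vectors]          # per-branch confidence
    total = sum(weights)
    n_classes = len(prob_vectors[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_vectors)) / total
            for c in range(n_classes)]

branch_a = [0.7, 0.2, 0.1]    # confident branch
branch_b = [0.4, 0.35, 0.25]  # less confident branch
fused = fuse([branch_a, branch_b])
# the confident branch contributes more to the fused class probabilities
```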
  • Algorithms, Vol. 14, Pages 291: An Algorithm for Making Regime-Changing
           Markov Decisions

    • Authors: Juri Hinz
      First page: 291
      Abstract: In industrial applications, the processes of optimal sequential decision making are naturally formulated and optimized within a standard setting of Markov decision theory. In practice, however, decisions must be made under incomplete and uncertain information about parameters and transition probabilities. This situation occurs when a system may suffer a regime switch that changes not only the transition probabilities but also the control costs. After such an event, the effect of the actions may turn to the opposite, meaning that all strategies must be revised. Due to the practical importance of this problem, a variety of methods have been suggested, ranging from incorporating regime switches into Markov dynamics to numerous concepts addressing model uncertainty. In this work, we suggest a pragmatic and practical approach: using a natural re-formulation of this problem as a so-called convex switching system, we make efficient numerical algorithms applicable.
      Citation: Algorithms
      PubDate: 2021-10-04
      DOI: 10.3390/a14100291
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 292: Utilizing the Particle Swarm Optimization
           Algorithm for Determining Control Parameters for Civil Structures Subject
           to Seismic Excitation

    • Authors: Courtney A. Peckens, Andrea Alsgaard, Camille Fogg, Mary C. Ngoma, Clara Voskuil
      First page: 292
      Abstract: Structural control of civil infrastructure in response to large external loads, such as earthquakes or wind, is not widely employed due to challenges regarding information exchange and the inherent latencies in the system caused by complex computations related to the control algorithm. This study employs front-end signal processing at the sensing node to alleviate computations at the control node, resulting in a simplistic sum of weighted inputs to determine a control force. The control law simplifies to U = WP, where U is the control force, W is a pre-determined weight matrix, and P is a deconstructed representation of the response of the structure to the applied excitation. Determining the optimal weight matrix for this calculation is non-trivial, and this study uses the particle swarm optimization (PSO) algorithm with a modified homing feature to converge on a possible solution. To further streamline the control algorithm, various pruning techniques are combined with the PSO algorithm in order to optimize the number of entries in the weight matrix. These optimization techniques are applied in simulation to a five-story structure, and the success of the resulting control parameters is quantified based on their ability to minimize the information exchange while maintaining control effectiveness. It is found that a magnitude-based pruning method, when paired with the PSO algorithm, offers the most effective control for a structure subject to seismic base excitation.
      Citation: Algorithms
      PubDate: 2021-10-08
      DOI: 10.3390/a14100292
      Issue No: Vol. 14, No. 10 (2021)
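A minimal sketch of the control law U = WP with magnitude-based pruning (matrix sizes, values, and the threshold are illustrative, not the study's tuned parameters):

```python
# Hedged sketch: entries of the weight matrix W whose magnitude falls below
# a threshold are zeroed, reducing the information exchange needed at run
# time; the control force is then the weighted sum U = W P.

def prune(W, threshold):
    """Magnitude-based pruning: zero out small-magnitude weights."""
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in W]

def control_force(W, P):
    """U = W P as a plain matrix-vector product."""
    return [sum(w * p for w, p in zip(row, P)) for row in W]

W = [[0.90, 0.02, -0.50],
     [0.01, 0.80,  0.03]]
W_pruned = prune(W, threshold=0.1)          # drops the three tiny weights
U = control_force(W_pruned, P=[1.0, 2.0, 3.0])
```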
  • Algorithms, Vol. 14, Pages 293: A Unified Formulation of Analytical and
           Numerical Methods for Solving Linear Fredholm Integral Equations

    • Authors: Efthimios Providas
      First page: 293
      Abstract: This article is concerned with the construction of approximate analytic solutions to linear Fredholm integral equations of the second kind with general continuous kernels. A unified treatment of some classes of analytical and numerical classical methods, such as the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM) and the Projection Methods (PM), is proposed. The problem is formulated as an abstract equation in a Banach space and a solution formula is derived. Then, several approximating schemes are discussed. In all cases, the method yields an explicit, albeit approximate, solution. Several examples are solved to illustrate the performance of the technique.
      Citation: Algorithms
      PubDate: 2021-10-10
      DOI: 10.3390/a14100293
      Issue No: Vol. 14, No. 10 (2021)
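For the simplest degenerate kernel, the method admits a closed-form solution, which makes it a useful sanity check. The sketch below (a rank-one example chosen for illustration, not one of the paper's examples) solves u(x) = f(x) + lam * integral_0^1 of x*t*u(t) dt:

```python
# Hedged sketch of the Degenerate Kernel Method (DKM) idea for a second-kind
# Fredholm equation with the rank-one kernel K(x,t) = x*t. Writing
# u(x) = f(x) + lam * x * c reduces the integral equation to one scalar
# equation for the moment c = integral of t*u(t) dt.

def solve_rank_one(f_moment, t2_moment, lam):
    """Solve c = f_moment + lam * c * t2_moment for c."""
    return f_moment / (1.0 - lam * t2_moment)

# Example: f(x) = x on [0, 1], lam = 1:
#   f_moment  = integral of t*f(t) dt = 1/3
#   t2_moment = integral of t^2 dt    = 1/3
c = solve_rank_one(f_moment=1 / 3, t2_moment=1 / 3, lam=1.0)
u = lambda x: x + 1.0 * x * c          # exact solution u(x) = 1.5 * x
```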
  • Algorithms, Vol. 14, Pages 294: Globally Optimizing QAOA Circuit Depth for
           Constrained Optimization Problems

    • Authors: Rebekah Herrman, Lorna Treffert, James Ostrowski, Phillip C. Lotshaw, Travis S. Humble, George Siopsis
      First page: 294
      Abstract: We develop a global variable substitution method that reduces n-variable monomials in combinatorial optimization problems to equivalent instances with monomials in fewer variables. We apply this technique to 3-SAT and analyze the optimal quantum unitary circuit depth needed to solve the reduced problem using the quantum approximate optimization algorithm. For benchmark 3-SAT problems, we find that the upper bound of the unitary circuit depth is smaller when the problem is formulated as a product and uses the substitution method to decompose gates than when the problem is written in the linear formulation, which requires no decomposition.
      Citation: Algorithms
      PubDate: 2021-10-11
      DOI: 10.3390/a14100294
      Issue No: Vol. 14, No. 10 (2021)
  • Algorithms, Vol. 14, Pages 295: Ant Colony Optimization with Warm-Up

    • Authors: Mattia Neroni
      First page: 295
      Abstract: Ant Colony Optimization (ACO) is a probabilistic technique inspired by the behavior of ants for solving computational problems that may be reduced to finding the best path through a graph. Some species of ants deposit pheromone on the ground to mark favorable paths that should be used by other members of the colony, and ACO implements a similar mechanism for solving optimization problems. In this paper, a warm-up procedure for the ACO is proposed. During the warm-up, the pheromone matrix is initialized to provide an efficient new starting point for the algorithm, so that it can obtain the same (or better) results with fewer iterations. The warm-up is based exclusively on the graph, which, in most applications, is given and does not need to be recalculated before each execution of the algorithm. In this way, the warm-up can be performed only once, and it speeds up the algorithm every time it is used from then on. The proposed solution is validated on a set of traveling salesman problem instances and in the simulation of a real industrial application for the routing of pickers in a manual warehouse. During the validation, it is compared with other ACO variants that adopt a pheromone initialization technique, and the results show that, in most cases, the proposed warm-up allows the ACO to obtain the same or better results with fewer iterations.
      Citation: Algorithms
      PubDate: 2021-10-12
      DOI: 10.3390/a14100295
      Issue No: Vol. 14, No. 10 (2021)
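One natural graph-only warm-up, sketched below for illustration (the paper's exact procedure may differ), initializes each row of the pheromone matrix from the edge costs alone, so shorter edges start with more pheromone:

```python
# Hedged sketch of a graph-only pheromone warm-up: before any ACO iteration,
# initialize the pheromone matrix from the distance matrix alone, giving
# shorter edges more initial pheromone so the colony starts informed.

def warm_up_pheromone(dist):
    n = len(dist)
    tau = [[0.0] * n for _ in range(n)]
    for i in range(n):
        heur = [1.0 / dist[i][j] if j != i else 0.0 for j in range(n)]
        s = sum(heur)
        for j in range(n):
            tau[i][j] = heur[j] / s          # row-normalized desirability
    return tau

dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
tau = warm_up_pheromone(dist)
# from node 0, the short edge 0 -> 1 starts with the most pheromone
```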
  • Algorithms, Vol. 14, Pages 296: Genz and Mendell-Elston Estimation of the
           High-Dimensional Multivariate Normal Distribution

    • Authors: Lucy Blondell, Mark Z. Kos, John Blangero, Harald H. H. Göring
      First page: 296
      Abstract: Statistical analysis of multinomial data in complex datasets often requires estimation of the multivariate normal (mvn) distribution for models in which the dimensionality can easily reach 10–1000 and higher. Few algorithms for estimating the mvn distribution can offer robust and efficient performance over such a range of dimensions. We report a simulation-based comparison of two algorithms for the mvn that are widely used in statistical genetic applications. The venerable Mendell-Elston approximation is fast but execution time increases rapidly with the number of dimensions, estimates are generally biased, and an error bound is lacking. The correlation between variables significantly affects absolute error but not overall execution time. The Monte Carlo-based approach described by Genz returns unbiased and error-bounded estimates, but execution time is more sensitive to the correlation between variables. For ultra-high-dimensional problems, however, the Genz algorithm exhibits better scale characteristics and greater time-weighted efficiency of estimation.
      Citation: Algorithms
      PubDate: 2021-10-14
      DOI: 10.3390/a14100296
      Issue No: Vol. 14, No. 10 (2021)
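The contrast the abstract draws, an error-bounded Monte Carlo estimate versus a fast but unbounded approximation, is easy to see in miniature. The sketch below (a plain Monte Carlo orthant probability for a bivariate normal; the actual Genz algorithm uses a smarter sequential transformation) returns both an estimate and its standard error:

```python
import math
import random

# Hedged sketch of the Monte Carlo idea behind Genz-style mvn estimation:
# sample a correlated bivariate normal and count how often it falls in the
# target region, reporting a standard error alongside the estimate (an
# error bound the Mendell-Elston approximation lacks).

def mc_orthant_prob(rho, n=200_000, seed=42):
    random.seed(seed)
    hits = 0
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        x1 = z1                                       # Cholesky factor of
        x2 = rho * z1 + math.sqrt(1 - rho * rho) * z2  # the 2x2 correlation
        hits += (x1 < 0) and (x2 < 0)
    p = hits / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

p, se = mc_orthant_prob(rho=0.5)
# the exact orthant probability is 1/4 + asin(0.5)/(2*pi) = 1/3
```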
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762


JournalTOCs © 2009-