Subjects -> MATHEMATICS (Total: 1100 journals)
    - APPLIED MATHEMATICS (88 journals)
    - GEOMETRY AND TOPOLOGY (23 journals)
    - MATHEMATICS (812 journals)
    - MATHEMATICS (GENERAL) (43 journals)
    - NUMERICAL ANALYSIS (24 journals)
    - PROBABILITIES AND MATH STATISTICS (110 journals)

MATHEMATICS (812 journals)

Showing 1 - 200 of 538 Journals sorted alphabetically
Abakós     Open Access   (Followers: 5)
Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg     Hybrid Journal   (Followers: 4)
Academic Voices : A Multidisciplinary Journal     Open Access   (Followers: 2)
Accounting Perspectives     Full-text available via subscription   (Followers: 7)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 16)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Mathematical Software (TOMS)     Hybrid Journal   (Followers: 6)
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 39)
Acta Applicandae Mathematicae     Hybrid Journal   (Followers: 1)
Acta Mathematica     Hybrid Journal   (Followers: 13)
Acta Mathematica Hungarica     Hybrid Journal   (Followers: 2)
Acta Mathematica Scientia     Full-text available via subscription   (Followers: 5)
Acta Mathematica Sinica, English Series     Hybrid Journal   (Followers: 6)
Acta Mathematica Vietnamica     Hybrid Journal  
Acta Mathematicae Applicatae Sinica, English Series     Hybrid Journal  
Advanced Science Letters     Full-text available via subscription   (Followers: 12)
Advances in Applied Clifford Algebras     Hybrid Journal   (Followers: 4)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 6)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Complex Systems     Hybrid Journal   (Followers: 10)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 23)
Advances in Decision Sciences     Open Access   (Followers: 4)
Advances in Difference Equations     Open Access   (Followers: 3)
Advances in Fixed Point Theory     Open Access   (Followers: 8)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 19)
Advances in Linear Algebra & Matrix Theory     Open Access   (Followers: 11)
Advances in Materials Science     Open Access   (Followers: 19)
Advances in Mathematical Physics     Open Access   (Followers: 8)
Advances in Mathematics     Full-text available via subscription   (Followers: 17)
Advances in Nonlinear Analysis     Open Access   (Followers: 1)
Advances in Numerical Analysis     Open Access   (Followers: 9)
Advances in Operations Research     Open Access   (Followers: 13)
Advances in Operator Theory     Hybrid Journal   (Followers: 4)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Pure and Applied Mathematics     Hybrid Journal   (Followers: 10)
Advances in Pure Mathematics     Open Access   (Followers: 11)
Advances in Science and Research (ASR)     Open Access   (Followers: 9)
Aequationes Mathematicae     Hybrid Journal   (Followers: 2)
African Journal of Educational Studies in Mathematics and Sciences     Full-text available via subscription   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 7)
Afrika Matematika     Hybrid Journal   (Followers: 3)
Air, Soil & Water Research     Open Access   (Followers: 13)
AKSIOMA Journal of Mathematics Education     Open Access   (Followers: 3)
AKSIOMATIK : Jurnal Penelitian Pendidikan dan Pembelajaran Matematika     Open Access   (Followers: 1)
Al-Jabar : Jurnal Pendidikan Matematika     Open Access   (Followers: 1)
Al-Qadisiyah Journal for Computer Science and Mathematics     Open Access   (Followers: 1)
AL-Rafidain Journal of Computer Sciences and Mathematics     Open Access   (Followers: 6)
Algebra and Logic     Hybrid Journal   (Followers: 7)
Algebra Colloquium     Hybrid Journal   (Followers: 4)
Algebra Universalis     Hybrid Journal   (Followers: 2)
Algorithmic Operations Research     Open Access   (Followers: 5)
Algorithms     Open Access   (Followers: 12)
Algorithms Research     Open Access   (Followers: 1)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 10)
American Journal of Mathematical Analysis     Open Access   (Followers: 2)
American Journal of Mathematical and Management Sciences     Hybrid Journal   (Followers: 1)
American Journal of Mathematics     Full-text available via subscription   (Followers: 7)
American Journal of Operations Research     Open Access   (Followers: 8)
American Mathematical Monthly     Full-text available via subscription   (Followers: 6)
An International Journal of Optimization and Control: Theories & Applications     Open Access   (Followers: 11)
Anadol University Journal of Science and Technology B : Theoritical Sciences     Open Access  
Analele Universitatii Ovidius Constanta - Seria Matematica     Open Access  
Analysis and Applications     Hybrid Journal   (Followers: 1)
Analysis and Mathematical Physics     Hybrid Journal   (Followers: 6)
Analysis Mathematica     Full-text available via subscription  
Analysis. International mathematical journal of analysis and its applications     Hybrid Journal   (Followers: 5)
Anargya : Jurnal Ilmiah Pendidikan Matematika     Open Access   (Followers: 3)
Annales Mathematicae Silesianae     Open Access   (Followers: 2)
Annales mathématiques du Québec     Hybrid Journal   (Followers: 4)
Annales Universitatis Mariae Curie-Sklodowska, sectio A – Mathematica     Open Access   (Followers: 1)
Annales Universitatis Paedagogicae Cracoviensis. Studia Mathematica     Open Access  
Annali di Matematica Pura ed Applicata     Hybrid Journal   (Followers: 1)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 13)
Annals of Discrete Mathematics     Full-text available via subscription   (Followers: 8)
Annals of Functional Analysis     Hybrid Journal   (Followers: 4)
Annals of Mathematics     Full-text available via subscription   (Followers: 3)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 14)
Annals of PDE     Hybrid Journal   (Followers: 1)
Annals of Pure and Applied Logic     Open Access   (Followers: 4)
Annals of the Alexandru Ioan Cuza University - Mathematics     Open Access  
Annals of the Institute of Statistical Mathematics     Hybrid Journal   (Followers: 1)
Annals of West University of Timisoara - Mathematics     Open Access  
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access   (Followers: 2)
Annuaire du Collège de France     Open Access   (Followers: 6)
ANZIAM Journal     Open Access   (Followers: 1)
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 3)
Applications of Mathematics     Hybrid Journal   (Followers: 3)
Applied Categorical Structures     Hybrid Journal   (Followers: 4)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 14)
Applied Mathematics     Open Access   (Followers: 4)
Applied Mathematics     Open Access   (Followers: 8)
Applied Mathematics & Optimization     Hybrid Journal   (Followers: 10)
Applied Mathematics - A Journal of Chinese Universities     Hybrid Journal   (Followers: 1)
Applied Mathematics and Nonlinear Sciences     Open Access  
Applied Mathematics Letters     Full-text available via subscription   (Followers: 4)
Applied Mathematics Research eXpress     Hybrid Journal   (Followers: 1)
Applied Network Science     Open Access   (Followers: 3)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 6)
Arab Journal of Mathematical Sciences     Open Access   (Followers: 4)
Arabian Journal of Mathematics     Open Access   (Followers: 2)
Archive for Mathematical Logic     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 6)
Archive of Numerical Software     Open Access  
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 6)
Arkiv för Matematik     Hybrid Journal   (Followers: 1)
Armenian Journal of Mathematics     Open Access   (Followers: 1)
Arnold Mathematical Journal     Hybrid Journal   (Followers: 1)
Artificial Satellites     Open Access   (Followers: 24)
Asia-Pacific Journal of Operational Research     Hybrid Journal   (Followers: 3)
Asian Journal of Algebra     Open Access   (Followers: 1)
Asian Research Journal of Mathematics     Open Access   (Followers: 1)
Asian-European Journal of Mathematics     Hybrid Journal   (Followers: 3)
Australian Mathematics Teacher, The     Full-text available via subscription   (Followers: 7)
Australian Primary Mathematics Classroom     Full-text available via subscription   (Followers: 5)
Australian Senior Mathematics Journal     Full-text available via subscription   (Followers: 2)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Axioms     Open Access   (Followers: 1)
Baltic International Yearbook of Cognition, Logic and Communication     Open Access   (Followers: 2)
Banach Journal of Mathematical Analysis     Hybrid Journal   (Followers: 2)
Basin Research     Hybrid Journal   (Followers: 5)
BIBECHANA     Open Access   (Followers: 2)
Biomath     Open Access  
BIT Numerical Mathematics     Hybrid Journal   (Followers: 1)
Boletim Cearense de Educação e História da Matemática     Open Access  
Boletim de Educação Matemática     Open Access  
Boletín de la Sociedad Matemática Mexicana     Hybrid Journal  
Bollettino dell'Unione Matematica Italiana     Full-text available via subscription   (Followers: 2)
British Journal of Mathematical and Statistical Psychology     Full-text available via subscription   (Followers: 18)
Bruno Pini Mathematical Analysis Seminar     Open Access  
Buletinul Academiei de Stiinte a Republicii Moldova. Matematica     Open Access   (Followers: 13)
Bulletin des Sciences Mathématiques     Full-text available via subscription   (Followers: 4)
Bulletin of Dnipropetrovsk University. Series : Communications in Mathematical Modeling and Differential Equations Theory     Open Access   (Followers: 3)
Bulletin of Mathematical Sciences     Open Access   (Followers: 1)
Bulletin of Symbolic Logic     Full-text available via subscription   (Followers: 2)
Bulletin of the Australian Mathematical Society     Full-text available via subscription   (Followers: 2)
Bulletin of the Brazilian Mathematical Society, New Series     Hybrid Journal  
Bulletin of the Iranian Mathematical Society     Hybrid Journal  
Bulletin of the London Mathematical Society     Hybrid Journal   (Followers: 3)
Bulletin of the Malaysian Mathematical Sciences Society     Hybrid Journal  
Cadernos do IME : Série Matemática     Open Access   (Followers: 1)
Calculus of Variations and Partial Differential Equations     Hybrid Journal  
Canadian Journal of Mathematics / Journal canadien de mathématiques     Hybrid Journal  
Canadian Journal of Science, Mathematics and Technology Education     Hybrid Journal   (Followers: 22)
Canadian Mathematical Bulletin     Hybrid Journal  
Carpathian Mathematical Publications     Open Access   (Followers: 1)
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 3)
CHANCE     Hybrid Journal   (Followers: 5)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chaos, Solitons & Fractals : X     Open Access  
ChemSusChem     Hybrid Journal   (Followers: 8)
Chinese Annals of Mathematics, Series B     Hybrid Journal  
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
Chinese Journal of Mathematics     Open Access  
Ciencia     Open Access   (Followers: 1)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CODEE Journal     Open Access   (Followers: 3)
Cogent Mathematics     Open Access   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 3)
Collectanea Mathematica     Hybrid Journal  
College Mathematics Journal     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 15)
Commentarii Mathematici Helvetici     Hybrid Journal  
Communications in Advanced Mathematical Sciences     Open Access  
Communications in Combinatorics and Optimization     Open Access  
Communications in Contemporary Mathematics     Hybrid Journal  
Communications in Mathematical Physics     Hybrid Journal   (Followers: 4)
Communications On Pure & Applied Mathematics     Hybrid Journal   (Followers: 4)
Complex Analysis and its Synergies     Open Access   (Followers: 3)
Complex Variables and Elliptic Equations: An International Journal     Hybrid Journal  
Composite Materials Series     Full-text available via subscription   (Followers: 9)
Compositio Mathematica     Full-text available via subscription  
Comptes Rendus Mathematique     Full-text available via subscription  
Computational and Applied Mathematics     Hybrid Journal   (Followers: 4)
Computational and Mathematical Methods     Hybrid Journal  
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 9)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Optimization and Applications     Hybrid Journal   (Followers: 9)
Computers & Mathematics with Applications     Full-text available via subscription   (Followers: 11)
Concrete Operators     Open Access   (Followers: 4)
Confluentes Mathematici     Hybrid Journal  
Contributions to Discrete Mathematics     Open Access   (Followers: 2)
Contributions to Game Theory and Management     Open Access  
COSMOS     Hybrid Journal  
Cryptography and Communications     Hybrid Journal   (Followers: 13)
Cuadernos de Investigación y Formación en Educación Matemática     Open Access  
Cubo. A Mathematical Journal     Open Access  
Current Research in Biostatistics     Open Access   (Followers: 8)
Czechoslovak Mathematical Journal     Hybrid Journal   (Followers: 1)
Daya Matematis : Jurnal Inovasi Pendidikan Matematika     Open Access   (Followers: 2)
Demographic Research     Open Access   (Followers: 15)
Demonstratio Mathematica     Open Access  


Algorithms
Journal Prestige (SJR): 0.217
Citation Impact (citeScore): 1
Number of Followers: 12  

  This is an Open Access journal
ISSN (Print) 1999-4893
Published by MDPI
  • Algorithms, Vol. 13, Pages 155: Embedded Bayesian Network Contribution for
           a Safe Mission Planning of Autonomous Vehicles

    • Authors: Catherine Dezan, Sara Zermani, Chabha Hireche
      First page: 155
      Abstract: Bayesian Networks (BN) are probabilistic models commonly used for diagnosis in numerous domains (medicine, finance, transport, robotics, …). In the case of autonomous vehicles, they can contribute to intelligent monitors that take the environmental context into account. We show in this paper some key abilities of BNs that can help in the design of fault detection, isolation and recovery (FDIR) modules. One of the main difficulties with the BN model is generally constructing it for the case under study. We therefore propose automatic generation techniques from failure mode and effects analysis (FMEA)-like tables using a design-pattern approach. Once defined, these modules have to operate online on board autonomous vehicles. In a second part, we propose a design methodology for real-time, non-intrusive implementations of the BN modules on FPGA-SoC platforms. We show that the FPGA implementation can offer an interesting speed-up at very limited energy cost. Lastly, we show how these BN modules can be incorporated into the decision-making model for the mission planning of unmanned aerial vehicles (UAVs). We illustrate the integration by means of two models: the Decision Network model, a straightforward extension of the BN model, and the BFM model, an extension of the Markov Decision Process (MDP) decision-making model incorporating a BN. We illustrate the different proposals with realistic examples and show that the hybrid implementation on FPGA-SoC can offer some benefits.
      Citation: Algorithms
      PubDate: 2020-06-28
      DOI: 10.3390/a13070155
      Issue No: Vol. 13, No. 7 (2020)
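The enumeration-style BN inference the abstract alludes to can be sketched on a toy diagnosis network. Everything below (variable names, structure, probabilities) is hypothetical and far simpler than the paper's generated FDIR models:

```python
# Toy two-evidence Bayesian network for fault diagnosis, inference by
# enumeration. All structure and numbers are illustrative only.
P_F = {True: 0.05, False: 0.95}          # prior probability of a fault
P_A_given_F = {True: 0.90, False: 0.05}  # P(Alarm=True | Fault)
P_G_given_F = {True: 0.70, False: 0.10}  # P(GPSDegraded=True | Fault)

def posterior_fault(alarm, gps_degraded):
    """P(Fault=True | evidence), summing the joint over both fault states."""
    def joint(f):
        pa = P_A_given_F[f] if alarm else 1 - P_A_given_F[f]
        pg = P_G_given_F[f] if gps_degraded else 1 - P_G_given_F[f]
        return P_F[f] * pa * pg
    return joint(True) / (joint(True) + joint(False))
```

Observing both symptoms raises the fault belief from the 5% prior to roughly 87%, the kind of monitor output an FDIR module would act on.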
       
  • Algorithms, Vol. 13, Pages 156: Generalized Polynomial Chaos Expansion for
           Fast and Accurate Uncertainty Quantification in Geomechanical Modelling

    • Authors: Claudia Zoccarato, Laura Gazzola, Massimiliano Ferronato, Pietro Teatini
      First page: 156
      Abstract: Geomechanical modelling of the processes associated with the exploitation of subsurface resources, such as land subsidence or triggered/induced seismicity, is a common practice of major interest. The prediction reliability depends on different sources of uncertainty, such as the parameterization of the constitutive model characterizing the deep rock behaviour. In this study, we focus on a Sobol'-based sensitivity analysis and uncertainty reduction via assimilation of land deformations. A synthetic test case application on a deep hydrocarbon reservoir is considered, where land settlements are predicted with the aid of a 3-D Finite Element (FE) model. Data assimilation is performed via the Ensemble Smoother (ES) technique and its variation in the form of Multiple Data Assimilation (ES-MDA). However, ES convergence requires a large number of Monte Carlo (MC) simulations, which may be computationally infeasible for large-scale and complex systems. For this reason, a surrogate model based on the generalized Polynomial Chaos Expansion (gPCE) is proposed as an approximation of the forward problem. This approach allows efficient computation of the Sobol' indices for the sensitivity analysis and greatly reduces the computational cost of the original ES and ES-MDA formulations, while also enhancing the accuracy of the overall prediction process.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070156
      Issue No: Vol. 13, No. 7 (2020)
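The surrogate idea is easy to demonstrate in one dimension with probabilists' Hermite polynomials, which are orthogonal under a standard-normal germ. The forward model below is a hypothetical cheap stand-in for the expensive FE run, not the authors' geomechanical model:

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite basis

rng = np.random.default_rng(0)

def forward(x):
    # Hypothetical stand-in for the expensive geomechanical model.
    return x**2 + 0.5 * x

# Sample the standard-normal germ and evaluate the model once per sample.
xi = rng.standard_normal(200)
y = forward(xi)

# Least-squares fit of the gPCE coefficients for He_0 .. He_3.
V = np.stack([He.hermeval(xi, np.eye(4)[k]) for k in range(4)], axis=1)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

def surrogate(x):
    """Cheap polynomial surrogate replacing the forward model."""
    return He.hermeval(x, coef)
```

Since x^2 = He_2(x) + 1, the fit recovers coefficients close to (1, 0.5, 1, 0), and by orthogonality the output variance is available in closed form as the sum of k! * c_k^2 over k >= 1, which is what makes Sobol' indices cheap to read off a gPCE.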
       
  • Algorithms, Vol. 13, Pages 157: The RONO (Rank-Order-Normalization)
           Procedure for Power-Spectrum Analysis of Datasets with Non-Normal
           Distributions

    • Authors: Sturrock, Scholkmann
      First page: 157
      Abstract: Standard (Lomb-Scargle, likelihood, etc.) procedures for power-spectrum analysis provide convenient estimates of the significance of any peak in a power spectrum, based—typically—on the assumption that the measurements being analyzed have a normal (i.e., Gaussian) distribution. However, the measurement sequence provided by a real experiment or a real observational program may not meet this requirement. The RONO (rank-order normalization) procedure generates a proxy distribution that retains the rank-order of the original measurements but has a strictly normal distribution. The proxy distribution may then be analyzed by standard power-spectrum analysis. We show by an example that the resulting power spectrum may prove to be quite close to the power spectrum obtained from the original data by a standard procedure, even if the distribution of the original measurements is far from normal. Such a comparison would tend to validate the original analysis.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070157
      Issue No: Vol. 13, No. 7 (2020)
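The procedure itself is a few lines: replace each measurement by the standard-normal quantile of its rank position. A minimal sketch, where the plotting-position formula (r + 0.5)/n is one common choice and not necessarily the authors':

```python
import numpy as np
from statistics import NormalDist  # stdlib inverse normal CDF

def rono(x):
    """Rank-order normalization: map each value to the standard-normal
    quantile of its rank position, preserving the original rank order."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))        # 0 .. n-1
    n = len(x)
    inv = NormalDist().inv_cdf
    return np.array([inv((r + 0.5) / n) for r in ranks])
```

The proxy series is strictly normal by construction, so a standard Lomb-Scargle analysis of it needs no distributional caveats.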
       
  • Algorithms, Vol. 13, Pages 158: Fuzzy C-Means Clustering Algorithm with
           Multiple Fuzzification Coefficients

    • Authors: Khang, Vuong, Tran, Fowler
      First page: 158
      Abstract: Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070158
      Issue No: Vol. 13, No. 7 (2020)
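The core modification can be sketched by threading a per-sample coefficient m_i through the classic FCM updates. This is a naive substitution for illustration; the paper derives its own convergence-preserving update formulas:

```python
import numpy as np

rng = np.random.default_rng(1)

def fcm_multi_m(X, c, m, n_iter=50, eps=1e-12):
    """Fuzzy C-means where sample i uses its own fuzzifier m[i] > 1."""
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m[:, None]                     # u_ik ** m_i
        V = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)         # centers
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps
        ratio = D[:, :, None] / D[:, None, :]    # d_ik / d_ij
        expo = (2.0 / (m - 1.0))[:, None, None]
        U = 1.0 / (ratio ** expo).sum(axis=2)    # classic FCM membership
    return U, V

# Two well-separated blobs, each sample with its own fuzzifier in [1.5, 2.5].
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(5.0, 0.5, (20, 2))])
U, V = fcm_multi_m(X, c=2, m=rng.uniform(1.5, 2.5, 40))
```

On separable data the centers land on the blob means regardless of the per-sample fuzzifiers; the coefficients mainly change how soft each sample's membership vector is.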
       
  • Algorithms, Vol. 13, Pages 159: Stream-Based Lossless Data Compression
           Applying Adaptive Entropy Coding for Hardware-Based Implementation

    • Authors: Shinichi Yamagiwa, Hayakawa, Marumo
      First page: 159
      Abstract: Driven by strong demand for very high-speed processor I/O, the physical performance of hardware I/O has increased drastically over the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because raw I/O performance no longer improves so quickly, it is time to consider other ways to increase it. To address this challenge, we focus on lossless data compression technology to reduce the amount of data in the communication path itself. Recent Big Data applications treat data streams that flow continuously and, owing to the high speed, never allow processing to stall. An efficient hardware-based data compression technology is therefore required. This paper proposes a novel lossless data compression scheme, called ASE coding. It encodes streaming data by applying an entropy coding approach: ASE coding instantly assigns the fewest bits to the corresponding compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, performance evaluations demonstrate that ASE coding adaptively shrinks streaming data and works with a small amount of hardware resources, without stalling or buffering any part of the data stream.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070159
      Issue No: Vol. 13, No. 7 (2020)
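The table-driven idea can be caricatured in a few lines: known symbols are sent as a table index whose width grows with the number of occupied entries, unknown symbols are sent raw and inserted. This is a loose sketch of the stated principle, not the actual ASE coder, which also manages the table and emits a real bitstream:

```python
from math import ceil, log2

def idx_bits(n):
    """Bits needed to address one of n occupied table entries."""
    return max(1, ceil(log2(n)))

def ase_like_encode(data):
    """Return (tokens, bit_count) for a toy adaptive table coder."""
    table, tokens, bits = [], [], 0
    for s in data:
        if s in table:
            tokens.append(("hit", table.index(s)))
            bits += 1 + idx_bits(len(table))   # flag bit + table index
        else:
            tokens.append(("miss", s))
            bits += 1 + 8                      # flag bit + raw byte
            table.append(s)
    return tokens, bits

def ase_like_decode(tokens):
    """Mirror the encoder's table updates to recover the original stream."""
    table, out = [], []
    for kind, v in tokens:
        if kind == "hit":
            out.append(table[v])
        else:
            out.append(v)
            table.append(v)
    return out
```

On repetitive byte streams the index tokens are much shorter than 8 bits, which is where the compression comes from.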
       
  • Algorithms, Vol. 13, Pages 160: Text Semantic Annotation: A Distributed
           Methodology Based on Community Coherence

    • Authors: Christos Makris, Georgios Pispirigos, Michael Angelos Simos
      First page: 160
      Abstract: Text annotation is the process of identifying the sense of a textual segment within a given context to a corresponding entity on a concept ontology. As the limitations of the bag-of-words paradigm become increasingly discernible in modern applications, several information retrieval and artificial intelligence tasks are shifting to semantic representations to address the inherent natural language polysemy and homonymy challenges. With extensive application in a broad range of scientific fields, such as digital marketing, bioinformatics, chemical engineering, neuroscience, and social sciences, community detection has attracted great scientific interest. In linguistics, where it serves to identify densely interconnected subgroups of semantic ontologies, community detection has proven beneficial for disambiguation improvement and ontology enhancement. In this paper we introduce a novel distributed supervised knowledge-based methodology employing community detection algorithms for text annotation with Wikipedia entities, establishing the concept of community coherence as a metric for local contextual coherence compatibility. Our experimental evaluation revealed that deeper inference of relatedness and local entity community coherence in the Wikipedia graph yields substantial improvements overall, chiefly by improving the accuracy of less common annotations. The proposed methodology attains robust disambiguation performance and is thus propitious for wider adoption.
      Citation: Algorithms
      PubDate: 2020-07-01
      DOI: 10.3390/a13070160
      Issue No: Vol. 13, No. 7 (2020)
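Community detection on a small graph can be illustrated with label propagation, one common algorithm family (the paper works on the much larger Wikipedia entity graph). Tie-breaking is made deterministic here for reproducibility; real implementations randomize both it and the visit order:

```python
def label_propagation(adj, n_rounds=10):
    """Asynchronous label propagation: each node repeatedly adopts the
    most frequent label among its neighbours (largest label on ties)."""
    labels = {v: v for v in adj}
    for _ in range(n_rounds):
        for v in adj:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts.values())
            labels[v] = max(l for l, c in counts.items() if c == best)
    return labels

# Two triangles joined by a single bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
```

The two triangles end up in separate communities despite the bridge: exactly the kind of locally coherent grouping the annotation method exploits for disambiguation.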
       
  • Algorithms, Vol. 13, Pages 161: CONDA-PM—A Systematic Review and
           Framework for Concept Drift Analysis in Process Mining

    • Authors: Ghada Elkhawaga, Mervat Abuelkheir, Sherif I. Barakat, Alaa M. Riad, Manfred Reichert
      First page: 161
      Abstract: Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation in which a business process changes while being analysed is denoted as concept drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying the effects of the latter. Concept drift analysis is crucial to enable early detection and management of changes, that is, deciding whether to promote a change to become part of an improved process, or to reject the change and make decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, a four-staged framework describing the phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches to analysing concept drift. We apply the CONDA-PM framework to current concept drift analysis approaches and evaluate their maturity. Applying the CONDA-PM framework highlights areas where research is needed to complement existing efforts.
      Citation: Algorithms
      PubDate: 2020-07-03
      DOI: 10.3390/a13070161
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 162: Sensitivity Analysis for Microscopic Crowd
           Simulation

    • Authors: Marion Gödel, Rainer Fischer, Gerta Köster
      First page: 162
      Abstract: Microscopic crowd simulation can help to enhance the safety of pedestrians in situations that range from museum visits to music festivals. To obtain a useful prediction, the input parameters must be chosen carefully. In many cases, a lack of knowledge or limited measurement accuracy add uncertainty to the input. In addition, for meaningful parameter studies, we first need to identify the most influential parameters of our parametric computer models. The field of uncertainty quantification offers standardized and fully automatized methods that we believe to be beneficial for pedestrian dynamics. In addition, many methods come at a comparatively low cost, even for computationally expensive problems. This allows for their application to larger scenarios. We aim to identify and adapt fitting methods to microscopic crowd simulation in order to explore their potential in pedestrian dynamics. In this work, we first perform a variance-based sensitivity analysis using Sobol’ indices and then crosscheck the results by a derivative-based measure, the activity scores. We apply both methods to a typical scenario in crowd simulation, a bottleneck. Because constrictions can lead to high crowd densities and delays in evacuations, several experiments and simulation studies have been conducted for this setting. We show qualitative agreement between the results of both methods. Additionally, we identify a one-dimensional subspace in the input parameter space and discuss its impact on the simulation. Moreover, we analyze and interpret the sensitivity indices with respect to the bottleneck scenario.
      Citation: Algorithms
      PubDate: 2020-07-05
      DOI: 10.3390/a13070162
      Issue No: Vol. 13, No. 7 (2020)
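A first-order Sobol' index measures the share of output variance explained by one input alone. A minimal pick-freeze Monte Carlo estimator on a toy additive model gives the idea (the paper applies this to a full crowd simulator, not a closed-form function):

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Toy additive model: the first input should dominate the variance.
    return 4.0 * x[:, 0] + x[:, 1]

def sobol_first_order(f, d, n):
    """Saltelli-style pick-freeze estimate of first-order Sobol' indices."""
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                   # swap in column i only
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

S = sobol_first_order(model, d=2, n=10_000)   # analytic values: 16/17, 1/17
```

Ranking the estimated indices is how the most influential simulation parameters are identified before any expensive parameter study.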
       
  • Algorithms, Vol. 13, Pages 163: An Interval Type-2 Fuzzy Risk Analysis
           Model (IT2FRAM) for Determining Construction Project Contingency Reserve

    • Authors: Fateminia, Sumati, Fayek
      First page: 163
      Abstract: Determining the contingency reserve is critical to project risk management. Classic methods of determining the contingency reserve rely heavily on historical data and fail to effectively incorporate certain types of uncertainty, such as vagueness, ambiguity, and subjectivity. In this paper, an interval type-2 fuzzy risk analysis model (IT2FRAM) is introduced for determining the contingency reserve. In IT2FRAM, the membership functions for the linguistic terms used to describe the probability and impact of risk and opportunity events are developed, optimized, and aggregated using interval type-2 fuzzy sets and the principle of justifiable granularity. IT2FRAM extends a fuzzy arithmetic-based risk analysis method that considers such uncertainties and addresses the limitations of probabilistic and deterministic contingency determination techniques. The contribution of IT2FRAM is that it considers the opinions of several subject matter experts in developing the membership functions of linguistic terms, and the effect of outlier opinions on those membership functions is reduced. IT2FRAM also enables the aggregation of non-linear membership functions into trapezoidal membership functions. A hypothetical case study illustrates the application of IT2FRAM in Fuzzy Risk Analyzer© (FRA©), a risk analysis software package.
      Citation: Algorithms
      PubDate: 2020-07-07
      DOI: 10.3390/a13070163
      Issue No: Vol. 13, No. 7 (2020)
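The building block behind such models is the membership function. A toy type-1 trapezoidal version with naive parameter-averaging across experts gives the flavour; IT2FRAM itself uses interval type-2 sets and justifiable-granularity aggregation, which this sketch does not reproduce:

```python
def trap_mu(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def aggregate(expert_traps):
    """Parameter-wise average of several experts' trapezoids (a crude
    stand-in for the paper's aggregation of expert opinions)."""
    n = len(expert_traps)
    return tuple(sum(t[k] for t in expert_traps) / n for k in range(4))

# Two hypothetical experts describing the linguistic term "high probability".
high = aggregate([(0.5, 0.6, 0.8, 0.9), (0.4, 0.6, 0.7, 1.0)])
```

Averaging keeps the aggregate trapezoidal by construction, which is one simple way to force non-linear expert inputs into the trapezoidal form the abstract mentions.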
       
  • Algorithms, Vol. 13, Pages 164: Nonparametric Estimation of Continuously
           Parametrized Families of Probability Density Functions—Computational
           Aspects

    • Authors: Wojciech Rafajłowicz
      First page: 164
      Abstract: We consider a rather general problem of nonparametric estimation of an uncountable set of probability density functions (p.d.f.'s) of the form f(x; r), where r is a non-random real variable ranging from R_1 to R_2. We put emphasis on the algorithmic aspects of this problem, since they are crucial for the exploratory analysis of the big data needed for the estimation. A specialized learning algorithm, based on the 2D FFT, is proposed and tested on observations that allow for estimating the p.d.f.'s of jet engine temperatures as a function of rotation speed. We also derive theoretical results concerning the convergence of the estimation procedure, which contain hints on selecting the parameters of the estimation algorithm.
      Citation: Algorithms
      PubDate: 2020-07-08
      DOI: 10.3390/a13070164
      Issue No: Vol. 13, No. 7 (2020)
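The FFT-based smoothing idea behind such estimators can be sketched generically: bin the (x, r) samples on a 2D grid, then smooth with a Gaussian kernel by pointwise multiplication in the frequency domain. This is only a minimal illustration, not Rafajłowicz's algorithm; the bandwidths, grid size, and synthetic data are invented.

```python
import numpy as np

def fft_kde_2d(x, r, grid=64, bw_x=0.1, bw_r=0.1):
    """Estimate a family of densities f(x; r) by binning samples on a
    2D grid and smoothing with a Gaussian kernel via the 2D FFT."""
    # Bin the (x, r) samples into a normalized 2D histogram.
    hist, xe, re_ = np.histogram2d(x, r, bins=grid, density=True)
    # FFT frequencies along each axis (cycles per unit).
    gx = np.fft.fftfreq(grid, d=(xe[1] - xe[0]))
    gr = np.fft.fftfreq(grid, d=(re_[1] - re_[0]))
    # Fourier transform of the Gaussian kernel, sampled on the grid.
    K = np.exp(-2 * (np.pi ** 2) * (bw_x ** 2 * gx[:, None] ** 2
                                    + bw_r ** 2 * gr[None, :] ** 2))
    # Convolution = pointwise multiplication in the frequency domain.
    smooth = np.real(np.fft.ifft2(np.fft.fft2(hist) * K))
    return np.clip(smooth, 0.0, None), xe, re_

rng = np.random.default_rng(0)
r = rng.uniform(0, 1, 5000)
x = rng.normal(loc=r, scale=0.2)        # density of x drifts with r
dens, xe, re_ = fft_kde_2d(x, r)
```

The smoothing costs O(grid² log grid) regardless of sample count once the data are binned, which is the computational attraction of the FFT route.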
       
  • Algorithms, Vol. 13, Pages 165: Image Edge Detector with Gabor Type
           Filters Using a Spiking Neural Network of Biologically Inspired Neurons

    • Authors: Krishnamurthy V. Vemuru
      First page: 165
      Abstract: We report the design of a Spiking Neural Network (SNN) edge detector with biologically inspired neurons that has a conceptual similarity with both Hodgkin-Huxley (HH) model neurons and Leaky Integrate-and-Fire (LIF) neurons. The membrane potential, which is used to determine the occurrence or absence of spike events at each time step, is computed using the analytical solution to a simplified version of the HH neuron model. We find that the SNN-based edge detector detects more edge pixels in images than a Sobel edge detector. We designed a pipeline for image classification with a low-exposure frame simulation layer, SNN edge detection layers as pre-processing layers, and a Convolutional Neural Network (CNN) as a classification module. We tested this pipeline on the classification task of the Digits dataset, which is available in MATLAB. We find that the SNN-based edge detection layer increases the image classification accuracy at lower exposure times, that is, for 1 < t < T/4, where t is the number of milliseconds in a simulated exposure frame and T is the total exposure time, with reference to a Sobel or Canny edge detection layer in the pipeline. These results pave the way for developing novel cognitive neuromorphic computing architectures for millisecond-timescale detection and object classification applications using event or spike cameras.
      Citation: Algorithms
      PubDate: 2020-07-09
      DOI: 10.3390/a13070165
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 131: Unsupervised Text Feature Selection Using
           Memetic Dichotomous Differential Evolution

    • Authors: Ibraheem Al-Jadir, Kok Wai Wong, Chun Che Fung, Hong Xie
      First page: 131
      Abstract: Feature Selection (FS) methods have been studied extensively in the literature, and they are a crucial component of machine learning techniques. However, unsupervised text feature selection has not been well studied in document clustering problems. Feature selection can be modelled as an optimization problem due to the large number of possible solutions that might be valid. In this paper, a memetic method that combines Differential Evolution (DE) with Simulated Annealing (SA) for unsupervised FS is proposed. Because only two values are used, indicating the existence or absence of a feature, a binary version of differential evolution is needed; a dichotomous DE was used for this purpose, and the proposed method is named Dichotomous Differential Evolution Simulated Annealing (DDESA). This method uses dichotomous mutation instead of the standard DE mutation to be more effective for binary problems. The Mean Absolute Distance (MAD) filter was used as the internal evaluation measure for feature subsets in this paper. The proposed method was compared with other state-of-the-art methods, including the standard DE combined with SA (named DESA in this paper), using five benchmark datasets. The F-micro and F-macro (F-scores) and Average Distance of Document to Cluster (ADDC) measures were utilized as evaluation measures, and the Reduction Rate (RR) was also used. Test results showed that the proposed DDESA outperformed the other tested methods in unsupervised text feature selection.
      Citation: Algorithms
      PubDate: 2020-05-26
      DOI: 10.3390/a13060131
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 132: Short-Term Wind Speed Forecasting Using
           Statistical and Machine Learning Methods

    • Authors: Lucky O. Daniel, Caston Sigauke, Colin Chibaya, Rendani Mbuvha
      First page: 132
      Abstract: Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its representation among other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two-days-ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that ANN displays superior predictive performance based on root mean square error (RMSE), whereas SGB outperforms in terms of mean absolute error (MAE) and the related mean absolute percentage error (MAPE). A further comparison of two forecast combination methods, linear and additive quantile regression averaging, shows the latter yielding lower prediction accuracy. The additive quantile regression averaging based prediction intervals also show outperformance in terms of validity, reliability, quality and accuracy. Among interval combination methods, the median method performs better than its pure average counterpart. Point forecast combination and interval forecasting methods are found to improve forecast performance.
      Citation: Algorithms
      PubDate: 2020-05-26
      DOI: 10.3390/a13060132
      Issue No: Vol. 13, No. 6 (2020)
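The point-forecast error measures used in such comparisons (RMSE, MAE, MAPE) have standard definitions; a minimal sketch with invented numbers, not data from the study:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - yhat)))

def mape(y, yhat):
    """Mean absolute percentage error (requires y != 0)."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)

y    = np.array([5.0, 7.5, 10.0, 6.0])   # observed wind speed (m/s)
yhat = np.array([5.5, 7.0, 9.0, 6.5])    # two-days-ahead forecast
```

Because RMSE squares the errors, it penalizes large misses more heavily than MAE, which is why a model can win on one measure and lose on the other, as in the abstract above.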
       
  • Algorithms, Vol. 13, Pages 133: Metric Embedding Learning on
           Multi-Directional Projections

    • Authors: Gábor Kertész
      First page: 133
      Abstract: Image-based instance recognition is a difficult problem, in some cases even for the human eye. While the latest developments in computer vision—mostly driven by deep learning—have shown that high-performance models for classification or categorization can be engineered, the problem of discriminating similar objects with a low number of samples remains challenging. Advances from multi-class classification are applied to object matching problems, as the feature extraction techniques are the same; nature-inspired multi-layered convolutional nets learn the representations, and the output of such a model maps them to a multidimensional encoding space. A metric-based loss brings embeddings of the same instance close to each other. While these solutions achieve high classification performance, low efficiency is caused by the memory cost of the high parameter count, which depends on the input image size. Upon shrinking the input, the model requires fewer trainable parameters, but performance decreases. This drawback is tackled by using compressed feature extraction, e.g., projections. In this paper, a multi-directional image projection transformation with fixed vector lengths (MDIPFL) is applied to one-shot recognition tasks, trained on Siamese and Triplet architectures. Results show that the MDIPFL-based approach achieves decent performance, despite the significantly lower number of parameters.
      Citation: Algorithms
      PubDate: 2020-05-29
      DOI: 10.3390/a13060133
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 134: Study of Quasi-Static Magnetization with
           the Random-Field Ising Model

    • Authors: Roman Gozdur
      First page: 134
      Abstract: The topic of this paper is modeling based on Hamiltonian spin interactions. Preliminary studies on the identification of a quasi-static magnetizing field in a magnetic system are presented. The random-field Ising model is then used to simulate a simplified ferromagnetic structure. The validation of the algorithms and the simulation tests were carried out for 2D and 3D model spaces containing at least 10⁶ unit cells. The research showed that the response of a slowly driven magnetic system does not depend on the external field sweep rate. Changes in the spatial magnetization of the lattice are very similar below a certain rate of external field change known as the quasi-static boundary. The observed differences in the magnetization curves obtained under quasi-static conditions stem from the random nature of the molecular field and the avalanche-like magnetization process.
      Citation: Algorithms
      PubDate: 2020-05-29
      DOI: 10.3390/a13060134
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 135: A Recursive Least-Squares Algorithm for
           the Identification of Trilinear Forms

    • Authors: Camelia Elisei-Iliescu, Laura-Maria Dogariu, Constantin Paleologu, Jacob Benesty, Andrei-Alexandru Enescu, Silviu Ciochină
      First page: 135
      Abstract: High-dimensional system identification problems can be efficiently addressed based on tensor decompositions and modelling. In this paper, we design a recursive least-squares (RLS) algorithm tailored for the identification of trilinear forms, namely RLS-TF. In our framework, the trilinear form is related to the decomposition of a third-order tensor (of rank one). The proposed RLS-TF algorithm acts on the individual components of the global impulse response, thus being efficient in terms of both performance and complexity. Simulation results indicate that the proposed solution outperforms not only the conventional RLS algorithm (which handles only the global impulse response), but also the previously developed trilinear counterparts based on the least-mean-squares algorithm.
      Citation: Algorithms
      PubDate: 2020-06-01
      DOI: 10.3390/a13060135
      Issue No: Vol. 13, No. 6 (2020)
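For readers unfamiliar with RLS itself, the conventional exponentially weighted RLS identifier that the paper uses as a baseline can be sketched as follows. This is a generic textbook implementation, not the RLS-TF variant; the filter order, forgetting factor, and test signal are invented.

```python
import numpy as np

def rls_identify(x, d, order=4, lam=0.99, delta=100.0):
    """Conventional exponentially weighted RLS system identification."""
    w = np.zeros(order)                    # filter coefficient estimate
    P = np.eye(order) * delta              # inverse correlation matrix
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)      # gain vector
        e = d[n] - w @ u                   # a priori error
        w = w + k * e                      # coefficient update
        P = (P - np.outer(k, u @ P)) / lam # inverse correlation update
    return w

rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1])        # unknown impulse response
x = rng.normal(size=2000)                  # white input signal
d = np.convolve(x, h)[:len(x)]             # noiseless desired signal
w = rls_identify(x, d)                     # w converges to h
```

The RLS-TF idea in the abstract is to run updates of this kind on the three small component vectors of the rank-one tensor decomposition instead of on one long global impulse response.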
       
  • Algorithms, Vol. 13, Pages 136: Improved Convergence Speed of a DCD-Based
           Algorithm for Sparse Solutions

    • Authors: Zhi Quan, Shuhua Lv
      First page: 136
      Abstract: To solve a system of equations that needs few updates, such as sparse systems, the leading dichotomous coordinate descent (DCD) algorithm is better than the cyclic DCD algorithm because of its fast speed of convergence. In the case of sparse systems requiring a large number of updates, the cyclic DCD algorithm converges faster and has a lower error level than the leading DCD algorithm. However, the leading DCD algorithm has a faster convergence speed in the initial updates. In this paper, we propose a combination of leading and cyclic DCD iterations, the leading-cyclic DCD algorithm, to improve the convergence speed of the cyclic DCD algorithm. The proposed algorithm involves two steps. First, by properly selecting the number of updates of the solution vector used in the leading DCD algorithm, a solution is obtained from the leading DCD algorithm. Second, taking the output of the leading DCD algorithm as the initial values, an improved soft output is generated by the cyclic DCD algorithm with a large number of iterations. Numerical results demonstrate that when the solution sparsity γ is in the interval [1/8, 6/8], the proposed leading-cyclic DCD algorithm outperforms both the existing cyclic and leading DCD algorithms for all iterations.
      Citation: Algorithms
      PubDate: 2020-06-04
      DOI: 10.3390/a13060136
      Issue No: Vol. 13, No. 6 (2020)
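A cyclic DCD iteration for the normal equations Rh = β can be sketched as follows. This is a generic textbook-style implementation with invented parameters, not the authors' leading-cyclic combination: coordinates are updated by ±α, with α halved at each precision level, so a hardware implementation needs only additions and bit shifts.

```python
import numpy as np

def cyclic_dcd(R, beta, H=1.0, Mb=16):
    """Cyclic dichotomous coordinate descent for R h = beta (R s.p.d.)."""
    N = len(beta)
    h = np.zeros(N)
    r = beta.copy()                        # residual vector beta - R h
    alpha = H / 2.0                        # initial power-of-two step size
    for _ in range(Mb):                    # one level per "bit" of precision
        improved = True
        while improved:
            improved = False
            for k in range(N):             # cyclic sweep over coordinates
                # Update coordinate k only if it reduces the cost.
                if abs(r[k]) > (alpha / 2.0) * R[k, k]:
                    s = 1.0 if r[k] > 0 else -1.0
                    h[k] += s * alpha
                    r -= s * alpha * R[:, k]
                    improved = True
        alpha /= 2.0                       # halve the step (a bit shift)
    return h

R = np.array([[4.0, 1.0], [1.0, 3.0]])
h_true = np.array([0.5, -0.25])
h = cyclic_dcd(R, R @ h_true)              # recovers h_true
```

A "leading" variant would instead always update the coordinate with the largest residual magnitude, which is what gives it its fast initial convergence.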
       
  • Algorithms, Vol. 13, Pages 137: Sparse Logistic Regression: Comparison of
           Regularization and Bayesian Implementations

    • Authors: Mattia Zanon, Giuliano Zambonin, Gian Antonio Susto, Seán McLoone
      First page: 137
      Abstract: In knowledge-based systems, besides obtaining good output prediction accuracy, it is crucial to understand the subset of input variables that have the most influence on the output, with the goal of gaining deeper insight into the underlying process. These requirements call for logistic model estimation techniques that provide a sparse solution, i.e., where coefficients associated with non-important variables are set to zero. In this work we compare the performance of two methods: the first is based on the well-known Least Absolute Shrinkage and Selection Operator (LASSO), which involves regularization with an ℓ1 norm; the second is the Relevance Vector Machine (RVM), which is based on a Bayesian implementation of the linear logistic model. The two methods are extensively compared in this paper on real and simulated datasets. Results show that, in general, the two approaches are comparable in terms of prediction performance. RVM outperforms the LASSO both in terms of structure recovery (estimation of the correct non-zero model coefficients) and prediction accuracy as the dimensionality of the data increases. However, LASSO shows comparable performance to RVM when the dimensionality of the data is much higher than the number of samples, that is, p ≫ n.
      Citation: Algorithms
      PubDate: 2020-06-08
      DOI: 10.3390/a13060137
      Issue No: Vol. 13, No. 6 (2020)
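The ℓ1-regularized estimation that LASSO-style methods perform can be sketched with a simple proximal-gradient (ISTA) solver: a gradient step on the logistic loss followed by soft-thresholding, which produces exact zeros. This is a generic illustration on synthetic data with invented hyperparameters, not the authors' experimental setup, and the Bayesian RVM side of the comparison has no comparably short sketch.

```python
import numpy as np

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """Sparse logistic regression via proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        z = X @ w
        # Gradient of the average logistic loss.
        grad = X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / n
        w = w - lr * grad
        # Soft-thresholding: the proximal operator of the l1 penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 10))
true_w = np.zeros(10)
true_w[:2] = [2.0, -1.5]                   # only 2 of 10 inputs matter
p_true = 1.0 / (1.0 + np.exp(-X @ true_w))
y = (rng.uniform(size=400) < p_true).astype(float)
w = l1_logistic(X, y)                      # sparse: most of w is zero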
       
  • Algorithms, Vol. 13, Pages 138: Late Acceptance Hill-Climbing Matheuristic
           for the General Lot Sizing and Scheduling Problem with Rich Constraints

    • Authors: Andreas Goerler, Eduardo Lalla-Ruiz, Stefan Voß
      First page: 138
      Abstract: This paper considers the general lot sizing and scheduling problem with rich constraints exemplified by means of rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing industry and the pharmaceutical industry. To address this problem, we propose the Late Acceptance Hill-climbing Matheuristic (LAHCM) as a novel solution framework that exploits and integrates the late acceptance hill climbing algorithm and exact approaches for speeding up the solution process in comparison to solving the problem by means of a general solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template leading to high-quality solutions within short computational times.
      Citation: Algorithms
      PubDate: 2020-06-09
      DOI: 10.3390/a13060138
      Issue No: Vol. 13, No. 6 (2020)
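The late acceptance rule at the core of such a matheuristic is compact: accept a candidate if it is no worse than the current solution or than the solution accepted a fixed number of steps ago. The sketch below shows only that rule on a toy integer objective; the GLSP-RP model, the exact subproblems, and all parameters here are invented for illustration.

```python
import random

def late_acceptance_hill_climbing(cost, neighbor, x0,
                                  history_len=50, iters=5000, seed=0):
    """Late Acceptance Hill-Climbing: accept a candidate if it is no worse
    than the current solution OR the one accepted history_len steps ago."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    hist = [fx] * history_len              # circular buffer of accepted costs
    best, fbest = x, fx
    for i in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or fy <= hist[i % history_len]:
            x, fx = y, fy                  # accept (possibly worsening) move
            if fy < fbest:
                best, fbest = y, fy
        hist[i % history_len] = fx         # record the current cost
    return best, fbest

# Toy demo: minimize a 1D quadratic over the integers with +/-1 moves.
cost = lambda v: (v - 7) ** 2
neighbor = lambda v, rng: v + rng.choice([-1, 1])
best, fbest = late_acceptance_hill_climbing(cost, neighbor, x0=-20)
```

The delayed comparison lets the search temporarily accept worsening moves, which is what helps it escape local optima without the temperature schedule of simulated annealing.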
       
  • Algorithms, Vol. 13, Pages 139: Optimization Algorithms for Detection of
           Social Interactions

    • Authors: Vincenzo Cutello, Georgia Fargetta, Mario Pavone, Rocco A. Scollo
      First page: 139
      Abstract: Community detection is one of the most challenging and interesting problems in many research areas. Being able to detect highly linked communities in a network can lead to many benefits, such as understanding relationships between entities or interactions between biological genes, for instance. Two different immunological algorithms have been designed for this problem, called Opt-IA and Hybrid-IA, respectively. The main difference between the two algorithms lies in the search strategy and the immunological operators developed: the former carries out a random search together with purely stochastic operators; the latter is instead based on a deterministic Local Search that tries to refine and improve the current solutions discovered. The robustness of Opt-IA and Hybrid-IA has been assessed on several real social networks. These same networks have also been used to compare both algorithms with seven other metaheuristics and the well-known greedy optimization Louvain algorithm. The experimental analysis conducted shows that Opt-IA and Hybrid-IA are reliable optimization methods for community detection, outperforming all compared algorithms.
      Citation: Algorithms
      PubDate: 2020-06-11
      DOI: 10.3390/a13060139
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 140: Special Issue on Ensemble Learning and
           Applications

    • Authors: Panagiotis Pintelas, Ioannis E. Livieris
      First page: 140
      Abstract: During the last decades, in the area of machine learning and data mining, the development of ensemble methods has gained significant attention from the scientific community. Machine learning ensemble methods combine multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been theoretically and experimentally shown to provide significantly better performance than their single base learners. In the literature, ensemble learning algorithms constitute a dominant, state-of-the-art approach for obtaining maximum performance; thus, they have been applied to a variety of real-world problems, ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting.
      Citation: Algorithms
      PubDate: 2020-06-11
      DOI: 10.3390/a13060140
      Issue No: Vol. 13, No. 6 (2020)
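The combination step at the heart of such ensembles can be as simple as majority voting over the base learners' class predictions; a generic sketch with invented predictions:

```python
import numpy as np

def majority_vote(predictions):
    """Combine class predictions from several base learners by majority vote.

    predictions: array-like of shape (n_learners, n_samples) with
    non-negative integer class labels.
    """
    preds = np.asarray(predictions)
    # For each sample (column), pick the most frequent label.
    return np.array([np.bincount(col).argmax() for col in preds.T])

votes = [[0, 1, 1, 0],   # learner 1
         [0, 1, 0, 0],   # learner 2
         [1, 1, 1, 0]]   # learner 3
combined = majority_vote(votes)
```

Bagging, boosting, and stacking differ in how the base learners are trained and weighted, but all reduce to some aggregation of this kind at prediction time.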
       
  • Algorithms, Vol. 13, Pages 141: Dynamic Ring Exploration with (H,S) View

    • Authors: Tsuyoshi Gotoh, Yuichi Sudo, Fukuhito Ooshita, Toshimitsu Masuzawa
      First page: 141
      Abstract: Research on mobile entities (called agents) in dynamic networks has attracted much attention in recent years. Exploration, which requires an agent to visit all the nodes in the network, is one of the most fundamental problems. While the exploration of dynamic networks with complete information or with no information about network changes has been studied, an agent with partial information about the network changes has not been considered yet, despite its practical importance. In this paper, we consider the exploration of dynamic networks by a single agent with partial information about network changes. To the best of our knowledge, this is the very first work to investigate the exploration problem with such partial information. As a first step in this research direction, we focus on 1-interval connected rings as dynamic networks. We assume that the single agent has partial information called the (H, S) view, by which it always knows whether or not each of the links within H hops is available in each of the next S time steps. In this setting, we show that H + S ≥ n and S ≥ ⌈n/2⌉ (where n is the size of the network) are necessary and sufficient conditions to explore 1-interval connected rings. Moreover, we investigate upper and lower bounds on the exploration time. It is proven that the exploration time is O(n²) for ⌈n/2⌉ ≤ S < 2H′ − 1, O(n²/H + nH) for S ≥ max(⌈n/2⌉, 2H′ − 1), O(n²/H + n log H) for S ≥ n − 1, and Ω(n²/H) for any S, where H′ = min(H, ⌊n/2⌋).
      Citation: Algorithms
      PubDate: 2020-06-12
      DOI: 10.3390/a13060141
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 142: Parallelized Swarm Intelligence Approach
           for Solving TSP and JSSP Problems

    • Authors: Jedrzejowicz, Wierzbowska
      First page: 142
      Abstract: One of the possible approaches to solving difficult optimization problems is applying population-based metaheuristics. Among such metaheuristics, there is a special class where searching for the best solution is based on the collective behavior of decentralized, self-organized agents. This study proposes an approach in which a swarm of agents tries to improve solutions from the population of solutions. The process is carried out in parallel threads. The proposed algorithm—based on the mushroom-picking metaphor—was implemented using Scala in an Apache Spark environment. An extended computational experiment shows how introducing a combination of simple optimization agents and increasing the number of threads may improve the results obtained by the model in the case of TSP and JSSP problems.
      Citation: Algorithms
      PubDate: 2020-06-12
      DOI: 10.3390/a13060142
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 143: Numerically Efficient Fuzzy MPC Algorithm
           with Advanced Generation of Prediction—Application to a Chemical Reactor
           

    • Authors: Piotr M. Marusak
      First page: 143
      Abstract: In Model Predictive Control (MPC) algorithms, control signals are generated after solving optimization problems. If the model used for prediction is linear, then the optimization problem is a standard, easy to solve, quadratic programming problem with linear constraints. However, such an algorithm may offer insufficient performance if applied to a nonlinear control plant. On the other hand, if the model used for prediction is nonlinear, then a non-convex optimization problem must be solved at each algorithm iteration. Numerical problems may then occur while solving it, and the time needed to calculate the control signals cannot be determined in advance. Therefore, approaches based on linearized models are preferred in practical applications. A fuzzy algorithm with an advanced generation of the prediction is proposed in the article. The prediction is obtained in such a way that the algorithm is formulated as a quadratic optimization problem but offers performance very close to that of the MPC algorithm with nonlinear optimization. The efficiency of the proposed approach is demonstrated in the control system of a nonlinear chemical control plant—a CSTR (Continuous Stirred-Tank Reactor) with the van de Vusse reaction.
      Citation: Algorithms
      PubDate: 2020-06-14
      DOI: 10.3390/a13060143
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 144: Efficient Probabilistic Joint Inversion of
           Direct Current Resistivity and Small-Loop Electromagnetic Data

    • Authors: Christin Bobe, Daan Hanssens, Thomas Hermans, Ellen Van De Vijver
      First page: 144
      Abstract: Often, multiple geophysical measurements are sensitive to the same subsurface parameters. In this case, joint inversions are mostly preferred over two (or more) separate inversions of the geophysical data sets due to the expected reduction of non-uniqueness in the joint inverse solution. This reduction can be quantified using Bayesian inversions. However, standard Markov chain Monte Carlo (MCMC) approaches are computationally expensive for most geophysical inverse problems. We present the Kalman ensemble generator (KEG) method as an efficient alternative to standard MCMC inversion approaches. As proof of concept, we provide two synthetic studies of joint inversion of frequency domain electromagnetic (FDEM) and direct current (DC) resistivity data for a parameter model with vertical variation in electrical conductivity. For both studies, the results show a considerable improvement for the joint framework over the separate inversions. This improvement consists of (1) an uncertainty reduction in the posterior probability density function and (2) an ensemble mean that is closer to the synthetic true electrical conductivities. Finally, we apply the KEG joint inversion to FDEM and DC resistivity field data. Joint field data inversions improve in the same way as seen for the synthetic studies.
      Citation: Algorithms
      PubDate: 2020-06-18
      DOI: 10.3390/a13060144
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 145: Binary Time Series Classification with
           Bayesian Convolutional Neural Networks When Monitoring for Marine Gas
           Discharges

    • Authors: Kristian Gundersen, Guttorm Alendal, Anna Oleynik, Nello Blaser
      First page: 145
      Abstract: The world’s oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 as the decade for marine science. To monitor the marine waters, with the purpose of detecting discharges of tracers from unknown locations, large areas will need to be covered with limited resources. To increase the detectability of marine gas seepage we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN classifies time series as belonging to a leak or no-leak situation, including the classification uncertainty. The latter is important for decision makers who must decide whether to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used for the learning process of the BCNN, and the task is to distinguish the signal from a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates its associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule, showcasing how the framework can be used in practice to make optimal decisions based on a given cost function.
      Citation: Algorithms
      PubDate: 2020-06-19
      DOI: 10.3390/a13060145
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 146: A Survey on Approximation in Parameterized
           Complexity: Hardness and Algorithms

    • Authors: Andreas Emil Feldmann, Karthik C. S., Euiwoong Lee, Pasin Manurangsi
      First page: 146
      Abstract: Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area both from the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.
      Citation: Algorithms
      PubDate: 2020-06-19
      DOI: 10.3390/a13060146
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 147: Local Comparison between Two Ninth
           Convergence Order Algorithms for Equations

    • Authors: Samundra Regmi, Ioannis K. Argyros, Santhosh George
      First page: 147
      Abstract: A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives up to the tenth order, which do not appear in the algorithms, were utilized to show convergence. Moreover, no error estimates, radii of convergence or computable results on the uniqueness of the solution were given. The novelty of our study is that we address all these concerns by using only the first derivative, which actually appears in these algorithms; this is how we extend their applicability. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria, and it can be used on other algorithms. Numerical experiments are utilized to test the convergence criteria.
      Citation: Algorithms
      PubDate: 2020-06-20
      DOI: 10.3390/a13060147
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 148: An Application of a Modified Gappy Proper
           Orthogonal Decomposition on Complexity Reduction of Allen-Cahn Equation

    • Authors: Chutipong Dechanubeksa, Saifon Chaturantabut
      First page: 148
      Abstract: This work considers model reduction techniques that can substantially decrease the computational cost of simulating the parametrized Allen–Cahn equation. We first employ the proper orthogonal decomposition (POD) approach to reduce the number of unknowns in the full-order discretized system. Since POD cannot reduce the computational complexity of the nonlinearity in the Allen–Cahn equation, we also apply the discrete empirical interpolation method (DEIM) to approximate the nonlinear term for a substantial reduction in overall simulation time. However, in general, the POD-DEIM approach is less accurate than the POD approach, since it further approximates the nonlinear term. To increase the accuracy of the POD-DEIM approach, this work introduces an extension of the DEIM approximation based on the concept of Gappy POD (GPOD), which is optimal in the least-squares sense. The POD-GPOD approach is tested and compared with the POD and POD-DEIM approaches on the Allen–Cahn equation for both fixed and varying parameter values. The modified GPOD approximation introduced in this work is demonstrated to improve the accuracy of DEIM without sacrificing too much of the computational speedup; e.g., in one of our numerical tests, the POD-GPOD approach provides an approximate solution to the parametrized Allen–Cahn equation 200 times faster than the full-order system, with an average error of order O(10⁻⁴). The POD-GPOD approach is therefore shown to be a promising technique that compromises between the accuracy of the POD approach and the efficiency of the POD-DEIM approach.
      Citation: Algorithms
      PubDate: 2020-06-22
      DOI: 10.3390/a13060148
      Issue No: Vol. 13, No. 6 (2020)
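The POD step itself reduces to a truncated SVD of a snapshot matrix; a minimal sketch, with synthetic tanh profiles standing in for Allen–Cahn solution snapshots and all sizes and tolerances invented:

```python
import numpy as np

# Columns of the snapshot matrix are solution profiles for varying parameters.
t = np.linspace(0, 1, 200)
snapshots = np.column_stack(
    [np.tanh((t - c) / 0.1) for c in np.linspace(0.2, 0.8, 40)])

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.9999)) + 1   # modes for 99.99% energy
Phi = U[:, :k]                                  # reduced POD basis

# Project a snapshot onto the k-dimensional POD subspace and reconstruct.
x = snapshots[:, 7]
x_rec = Phi @ (Phi.T @ x)
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
```

DEIM and its gappy-POD extension address the remaining bottleneck: evaluating the nonlinear term at only a few selected interpolation points instead of at all 200 grid points.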
       
  • Algorithms, Vol. 13, Pages 149: A Distributed Approach to the Evasion
           Problem

    • Authors: Denis Khryashchev, Jie Chu, Mikael Vejdemo-Johansson, Ping Ji
      First page: 149
      Abstract: The Evasion Problem is the question of whether—given a collection of sensors and a particular movement pattern over time—it is possible to stay undetected within the domain over the same stretch of time. It has been studied using topological techniques since 2006—with sufficient conditions for non-existence of an Evasion Path provided by de Silva and Ghrist; sufficient and necessary conditions with extended sensor capabilities provided by Adams and Carlsson; and sufficient and necessary conditions using sheaf theory by Krishnan and Ghrist. In this paper, we propose three algorithms for the Evasion Problem: one distributed algorithm extension of Adams’ approach for evasion path detection, and two different approaches to evasion path enumeration.
      Citation: Algorithms
      PubDate: 2020-06-23
      DOI: 10.3390/a13060149
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 150: Fibers of Failure: Classifying Errors in
           Predictive Processes

    • Authors: Leo S. Carlsson, Mikael Vejdemo-Johansson, Gunnar Carlsson, Pär G. Jönsson
      First page: 150
      Abstract: Predictive models are used in many different fields of science and engineering and are always prone to make faulty predictions. These faulty predictions can be more or less malignant depending on the model application. We describe fibers of failure (FiFa), a method to classify failure modes of predictive processes. Our method uses Mapper, an algorithm from topological data analysis (TDA), to build a graphical model of input data stratified by prediction errors. We demonstrate two ways to use the failure mode groupings: either to produce a correction layer that adjusts predictions by similarity to the failure modes, or to inspect members of the failure modes to illustrate and investigate what characterizes each failure mode. We demonstrate FiFa on two scenarios: a convolutional neural network (CNN) predicting MNIST images with added noise, and an artificial neural network (ANN) predicting the electrical energy consumption of an electric arc furnace (EAF). The correction layer on the CNN model improved its prediction accuracy significantly, while the inspection of failure modes for the EAF model provided guiding insights into the domain-specific reasons behind several high-error regions.
      Citation: Algorithms
      PubDate: 2020-06-23
      DOI: 10.3390/a13060150
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 151: Compression of Next-Generation Sequencing
           Data and of DNA Digital Files †

    • Authors: Carpentieri
      First page: 151
      Abstract: The memory use and network traffic caused by newly sequenced biological data have grown rapidly in recent years. Genomic projects such as HapMap and 1000 Genomes have contributed to the very large rise of databases and network traffic related to genomic data and to the development of new efficient technologies. The large-scale sequencing of DNA samples has brought new attention and produced new research, and thus the interest of the scientific community in genomic data has greatly increased. In a very short time, researchers have developed hardware tools, analysis software, algorithms, private databases, and infrastructures to support research in genomics. In this paper, we analyze different approaches for compressing digital files generated by Next-Generation Sequencing tools containing nucleotide sequences, and we discuss and evaluate the compression performance of generic compression algorithms by comparing them with Quip, a system designed by Jones et al. specifically for genomic file compression. Moreover, we present a simple but effective technique for the compression of DNA sequences that considers only the relevant DNA data, and we experimentally evaluate its performance.
      Citation: Algorithms
      PubDate: 2020-06-24
      DOI: 10.3390/a13060151
      Issue No: Vol. 13, No. 6 (2020)
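The baseline idea behind most DNA-specific compressors is that an {A, C, G, T} alphabet needs only 2 bits per base instead of 8. A minimal sketch of that packing (illustrative only, not Quip or the paper's technique):

```python
# Pack a nucleotide sequence at 2 bits per base and unpack it again.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        group = seq[i:i + 4]
        for ch in group:
            byte = (byte << 2) | CODE[ch]
        # left-align a final partial group so unpacking stays simple
        byte <<= 2 * (4 - len(group))
        out.append(byte)
    return bytes(out), len(seq)

def unpack(packed, n):
    seq = []
    for i in range(n):
        byte = packed[i // 4]
        shift = 2 * (3 - i % 4)
        seq.append(BASE[(byte >> shift) & 3])
    return "".join(seq)
```

Real tools go further by modeling base context and quality scores, but this 4x reduction over ASCII is the floor they start from.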
       
  • Algorithms, Vol. 13, Pages 152: DS Evidence Theory-Based Energy Balanced
           Routing Algorithm for Network Lifetime Enhancement in WSN-Assisted IOT

    • Authors: Tang, Lu
      First page: 152
      Abstract: Wireless sensor networks (WSNs) can provide data acquisition for long-term environment monitoring and are an important part of the Internet of Things (IoT). In the WSN-assisted IoT, energy-efficient routing algorithms are required to maintain a long network lifetime. In this paper, a DS evidence theory-based energy balanced routing algorithm for network lifetime enhancement (EBRA-NLE) in the WSN-assisted IoT is proposed. From the perspective of energy balance and minimization of routing path energy consumption, three attribute indexes are established to evaluate the forward neighboring nodes. Then a route selection method based on DS evidence theory is developed to comprehensively evaluate the nodes and select the optimal next hop. In order to avoid missing the ideal solution because of excessive differences between the index values, the sine function is used to adjust these differences. The simulation results show that the proposed EBRA-NLE has certain advantages in prolonging network lifetime and balancing energy between nodes.
      Citation: Algorithms
      PubDate: 2020-06-24
      DOI: 10.3390/a13060152
      Issue No: Vol. 13, No. 6 (2020)
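The evidence-fusion core that a DS (Dempster-Shafer) route selector builds on is Dempster's rule of combination. A toy sketch with hypothetical hypotheses, not the EBRA-NLE code:

```python
# Dempster's rule of combination for two mass functions over a frame of
# discernment. Masses are dicts keyed by frozenset hypotheses.

def combine(m1, m2):
    fused, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # mass assigned to the empty set
    k = 1.0 - conflict  # normalization constant
    return {h: v / k for h, v in fused.items()}
```

In the routing setting, each attribute index would contribute one mass function over candidate next hops, and repeated combination yields the node's overall support.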
       
  • Algorithms, Vol. 13, Pages 153: Novel Graph Model for Solving
           Collision-Free Multiple-Vehicle Traveling Salesman Problem Using Ant
           Colony Optimization

    • Authors: Pamosoaji, Setyohadi
      First page: 153
      Abstract: In this paper, a novel graph model for the Collision-Free Multiple Traveling Salesman Problem (CFMTSP) is proposed. In this problem, a group of vehicles start from different nodes in an undirected graph and must visit each node in the graph, following the well-known Traveling Salesman Problem (TSP) fashion, without any collision. This paper’s main objective is to obtain collision-free routes for each vehicle while minimizing the traveling time of the slowest vehicle. The problem can be approached by assigning a speed to each vehicle, which a novel augmented graph model makes possible. The proposed model accommodates not only the positions of nodes and inter-node distances, but also the speeds of all the vehicles, and can be used to derive optimal trajectories, i.e., routes and speeds, for all vehicles. An ant colony optimization (ACO) algorithm is used on the proposed augmented graph. Simulations show that the algorithm can satisfy the main objective. Factors that limit mission success, such as the inter-vehicle arrival time on a node, the number of vehicles, and the numbers of nodes and edges of the graph, are also discussed.
      Citation: Algorithms
      PubDate: 2020-06-26
      DOI: 10.3390/a13060153
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 154: An Algorithm for Fuzzy Negations
           Based-Intuitionistic Fuzzy Copula Aggregation Operators in Multiple
           Attribute Decision Making

    • Authors: Giakoumakis, Papadopoulos
      First page: 154
      Abstract: In this paper, we develop a novel computation model for Intuitionistic Fuzzy Values using fuzzy negations and Archimedean copulas. The structure of this model is based on extending the existing operations on intuitionistic fuzzy values with certain classes of fuzzy negations. Many properties of the proposed operations are investigated and proved. Additionally, we introduce the concepts of intuitionistic fuzzy Archimedean copula weighted arithmetic and geometric aggregation operators based on fuzzy negations, together with a further analysis of their properties. Finally, using a case study from a previously published paper, we show that our method has several advantages.
      Citation: Algorithms
      PubDate: 2020-06-26
      DOI: 10.3390/a13060154
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 106: Diagnosis in Tennis Serving Technique

    • Authors: Eugenio Roanes-Lozano, Eduardo A. Casella, Fernando Sánchez, Antonio Hernando
      First page: 106
      Abstract: Tennis is a sport with a very complex technique. Amateur tennis players have trainers and/or coaches, but are not usually accompanied by them to championships. Curiously, in this sport, the result of many matches can be changed by a small hint like 'hit the ball a little higher when serving'. However, the biomechanics of a tennis stroke are only clear to an expert. We, therefore, developed a prototype of a rule-based expert system (RBES) aimed at an amateur competition player who is not accompanied by his/her coach at a championship and is not serving as usual (the RBES is so far restricted to serving). The player has to answer a set of questions about how he/she is serving that day and his/her usual serving technique, and the RBES obtains a diagnosis of the possible reasons using logic inference (according to the logic rules that have been previously given to the RBES). A certain knowledge of tennis terminology and technique is required from the player, but that is something known at this level. The underlying logic is Boolean and the inference engine is algebraic (it uses Groebner bases).
      Citation: Algorithms
      PubDate: 2020-04-25
      DOI: 10.3390/a13050106
      Issue No: Vol. 13, No. 5 (2020)
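The diagnosis step of an RBES can be illustrated with plain forward chaining over Boolean rules. Note the paper's engine is algebraic (Groebner bases); this stand-in, with made-up tennis facts and rules, only shows the rule-firing idea:

```python
# Minimal forward-chaining inference over Boolean rules: keep firing rules
# whose premises are all known facts until nothing new can be derived.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premise set, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base, not the authors' serving rules.
RULES = [({"toss_too_low", "serves_long"}, "raise_toss"),
         ({"raise_toss"}, "diagnosis_available")]
```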
       
  • Algorithms, Vol. 13, Pages 107: How to Inspect and Measure Data Quality
           about Scientific Publications: Use Case of Wikipedia and CRIS Databases

    • Authors: Otmane Azeroual, Włodzimierz Lewoniewski
      First page: 107
      Abstract: The quality assurance of publication data in collaborative knowledge bases and in current research information systems (CRIS) becomes more and more relevant with the use of freely available spatial information in different application scenarios. When integrating this data into CRIS, it is necessary to be able to recognize and assess its quality. Only then is it possible to compile a result from the available data that fulfills its purpose for the user, namely to deliver reliable data and information. This paper discusses the quality problems of source metadata in Wikipedia and CRIS. Based on real data from over 40 million Wikipedia articles in various languages, we performed a preliminary quality analysis of the metadata of scientific publications using a data quality tool. So far, no data quality measurements have been programmed with Python to assess the quality of metadata from scientific publications in Wikipedia and CRIS. With this in mind, we programmed the methods and algorithms as code, but present them here in the form of pseudocode, to measure quality along objective data quality dimensions such as completeness, correctness, consistency, and timeliness. This was prepared as a macro service so that users can combine the measurement results with the program code to make a statement about their scientific publication metadata, and so that management can rely on high-quality data when making decisions.
      Citation: Algorithms
      PubDate: 2020-04-26
      DOI: 10.3390/a13050107
      Issue No: Vol. 13, No. 5 (2020)
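Of the quality dimensions named in the abstract, completeness is the most mechanical: the share of required metadata fields that are actually filled. A sketch with hypothetical field names and records, not the authors' macro service:

```python
# Completeness of publication metadata: fraction of non-empty required
# fields across all records. Field names here are illustrative.

REQUIRED = ("title", "authors", "year", "doi")

def completeness(records):
    filled = sum(1 for r in records for f in REQUIRED if r.get(f))
    return filled / (len(records) * len(REQUIRED))
```

Correctness, consistency, and timeliness need reference data or timestamps and are correspondingly harder to score.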
       
  • Algorithms, Vol. 13, Pages 108: Investigation of the iCC Framework
           Performance for Solving Constrained LSGO Problems

    • Authors: Alexey Vakhnin, Evgenii Sopov
      First page: 108
      Abstract: Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO. At the same time, cLSGO problems are not yet well studied. The majority of modern optimization techniques demonstrate insufficient performance when confronted with cLSGO tasks. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been proven in many scientific papers and studies. Moreover, the cooperative coevolution (CC) framework has been successfully applied to EAs used to solve LSGO problems. In this paper, a new approach for solving cLSGO tasks is proposed. This approach is based on CC and a method that increases the size of groups of variables at the decomposition stage (iCC) when solving cLSGO tasks. A new algorithm is proposed, which combines the success-history based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely, ε-iCC-SHADE). We investigated the performance of ε-iCC-SHADE and compared it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 Competition on constrained real-parameter optimization.
      Citation: Algorithms
      PubDate: 2020-04-26
      DOI: 10.3390/a13050108
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 109: Evolution of SOMs’ Structure and
           Learning Algorithm: From Visualization of High-Dimensional Data to
           Clustering of Complex Data

    • Authors: Marian B. Gorzałczany, Filip Rudziński
      First page: 109
      Abstract: In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs), are discussed. They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks by providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated.
      Citation: Algorithms
      PubDate: 2020-04-28
      DOI: 10.3390/a13050109
      Issue No: Vol. 13, No. 5 (2020)
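The conventional SOM that all of these generalizations start from reduces to a simple update: find the best-matching unit (BMU), then pull it and its grid neighbors toward the sample. A one-dimensional sketch with made-up learning parameters:

```python
# One training step of a conventional 1-D SOM with scalar weights:
# locate the BMU, then move every unit within the neighborhood radius
# toward the input sample by the learning rate.

def som_step(weights, x, lr=0.5, radius=1):
    bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:
            weights[i] += lr * (x - weights[i])
    return bmu
```

The growing variants surveyed in the paper differ precisely in what replaces the fixed grid: nodes are inserted, removed, or rewired during training instead of keeping the topology static.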
       
  • Algorithms, Vol. 13, Pages 110: p-Refined Multilevel Quasi-Monte Carlo for
           Galerkin Finite Element Methods with Applications in Civil Engineering

    • Authors: Philippe Blondeel, Pieterjan Robbe, Cédric Van hoorickx, Stijn François, Geert Lombaert, Stefan Vandewalle
      First page: 110
      Abstract: Civil engineering applications are often characterized by a large uncertainty on the material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can be expressed as a random field represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic responses, i.e., the expected value and variance of a chosen quantity of interest, remains very costly, even when state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using a recently developed multilevel method: p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). This method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. It is combined with a rank-1 Quasi-Monte Carlo (QMC) lattice rule, which yields faster convergence compared to the use of random Monte Carlo points. In this work, we developed algorithms for the p-MLQMC method for two dimensional problems. The p-MLQMC method is first benchmarked on an academic beam problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering, and typically suffers from large parameter uncertainty. For both considered problems, we observe a very significant reduction in the amount of computational work with respect to MLMC.
      Citation: Algorithms
      PubDate: 2020-04-28
      DOI: 10.3390/a13050110
      Issue No: Vol. 13, No. 5 (2020)
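The multilevel idea that p-MLQMC refines can be shown with the plain MLMC telescoping sum: estimate E[Q_L] as E[Q_0] plus corrections E[Q_l - Q_{l-1}], each with its own sample budget. A toy sketch using pseudo-random points (the paper replaces these with a rank-1 QMC lattice and a p-refined FE hierarchy):

```python
# Multilevel Monte Carlo telescoping estimator. sample_q(level, u) returns
# the level-`level` approximation of the quantity of interest for the random
# input u; each correction reuses the same u on the fine and coarse levels.
import random

def mlmc(sample_q, n_per_level, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for level, n in enumerate(n_per_level):
        s = 0.0
        for _ in range(n):
            u = rng.random()
            fine = sample_q(level, u)
            s += fine if level == 0 else fine - sample_q(level - 1, u)
        total += s / n
    return total
```

The cost saving comes from the corrections having small variance, so the expensive fine levels need only a few samples.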
       
  • Algorithms, Vol. 13, Pages 111: Multi-Level Joint Feature Learning for
           Person Re-Identification

    • Authors: Shaojun Wu, Ling Gao
      First page: 111
      Abstract: In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods extract only global features or only local features of pedestrian images, so inconspicuous details are easily ignored when learning image features, which is neither efficient nor robust in scenarios with large differences. In this paper, we propose a Multi-level Feature Fusion model that combines both global features and local features of images through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with the Part-based Multi-level Net to fuse low-to-high-level local features of pedestrian images. Global-Local Branches are used to extract the local features and global features at the highest level. The experiments prove that our deep learning model based on multi-level feature fusion works well in person re-identification. The overall results outperform the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming existing works by a large margin (more than 6%).
      Citation: Algorithms
      PubDate: 2020-04-29
      DOI: 10.3390/a13050111
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 112: Two NEH Heuristic Improvements for
           Flowshop Scheduling Problem with Makespan Criterion

    • Authors: Christophe Sauvey, Nathalie Sauer
      First page: 112
      Abstract: Since its creation by Nawaz, Enscore, and Ham in 1983, NEH remains the best heuristic method to solve flowshop scheduling problems. In the large body of literature dealing with the application of this heuristic, it can be clearly noted that results differ from one paper to another. In this paper, two methods are proposed to improve the original NEH, based on the two points in the method where choices must be made in case of equivalence between two job orders or partial sequences. When an equality occurs in a sorting method, two results are equivalent but can lead to different final results. For the first improvement, the factorial basis decomposition method is introduced, which computationally maps a number to a permutation. This method is very helpful for the first improvement and allows testing of all the sequencing possibilities for problems with up to 50 jobs. The second improvement is located where NEH keeps the best partial sequence. There, a list of equivalent partial sequences is kept, rather than only one, to give the overall method a chance of better performance. The successive use of the two improvements yields an average improvement of 19% over the already effective results of the original NEH method.
      Citation: Algorithms
      PubDate: 2020-04-29
      DOI: 10.3390/a13050112
      Issue No: Vol. 13, No. 5 (2020)
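For reference, the core NEH construction the two improvements attach to: order jobs by decreasing total processing time, then insert each job at the position minimizing the partial makespan. A plain sketch without the paper's tie-breaking refinements:

```python
# NEH heuristic for permutation flowshop scheduling (makespan criterion).
# p[j][m] = processing time of job j on machine m.

def makespan(seq, p):
    n_mach = len(p[0])
    c = [0.0] * n_mach  # completion times on each machine
    for j in seq:
        for m in range(n_mach):
            c[m] = max(c[m], c[m - 1] if m else 0.0) + p[j][m]
    return c[-1]

def neh(p):
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        # try every insertion position, keep the best partial sequence
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)
```

Ties in the initial sort and among equal-makespan insertion positions are exactly the two choice points the paper exploits.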
       
  • Algorithms, Vol. 13, Pages 113: Goal Oriented Time Adaptivity Using Local
           Error Estimates

    • Authors: Peter Meisrimel, Philipp Birken
      First page: 113
      Abstract: We consider initial value problems (IVPs) where we are interested in a quantity of interest (QoI) that is the integral in time of a functional of the solution. For these, we analyze goal oriented time adaptive methods that use only local error estimates. A local error estimate and timestep controller for step-wise contributions to the QoI are derived. We prove convergence of the error in the QoI as the tolerance goes to zero, under a controllability assumption. By analyzing global error propagation with respect to the QoI, we can identify possible issues and make performance predictions. Numerical tests verify these results. We compare performance with classical local error based time-adaptivity and a posteriori based adaptivity using the dual-weighted residual (DWR) method. For dissipative problems, local error based methods show better performance than DWR, and the goal oriented method shows good results in most examples, with significant speedups in some cases.
      Citation: Algorithms
      PubDate: 2020-04-30
      DOI: 10.3390/a13050113
      Issue No: Vol. 13, No. 5 (2020)
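The classical local-error controller such methods build on accepts a step when the error estimate meets the tolerance and rescales the step size via the asymptotic model err ~ dt^(p+1). A generic sketch; the safety factor and clipping bounds are typical textbook choices, not the paper's controller:

```python
# Elementary accept/reject step-size controller for an order-p local
# error estimate: dt_new = safety * dt * (tol/err)^(1/(p+1)), clipped
# to avoid drastic step-size changes.

def adapt_dt(dt, err, tol, order, safety=0.9):
    accepted = err <= tol
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    factor = max(0.2, min(5.0, factor))
    return accepted, dt * factor
```

The paper's goal-oriented variant replaces the generic local error by an estimate of the step's contribution to the QoI.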
       
  • Algorithms, Vol. 13, Pages 114: Automobile Fine-Grained Detection
           Algorithm Based on Multi-Improved YOLOv3 in Smart Streetlights

    • Authors: Yang, Yang, He, Fu, Jiang
      First page: 114
      Abstract: Upgrading ordinary streetlights to smart streetlights to help monitor traffic flow is a low-cost and pragmatic option for cities. Fine-grained classification of vehicles in the sight of smart streetlights is essential for intelligent transportation and smart cities. In order to improve the classification accuracy of distant cars, we propose a reformed YOLOv3 (You Only Look Once, version 3) algorithm to realize the detection of various types of automobiles, such as SUVs, sedans, taxis, commercial vehicles, small commercial vehicles, vans, buses, trucks and pickup trucks. Based on the dataset UA-DETRAC-LITE, manually labeled data is added to improve the data balance. First, data optimization for the vehicle target is performed to improve the generalization ability and position regression loss function of the model. The experimental results show that, within the range of 67 m, and through scale optimization (i.e., by introducing multi-scale training and anchor clustering), the classification accuracies of trucks and pickup trucks are raised by 26.98% and 16.54%, respectively, and the overall accuracy is increased by 8%. Secondly, label smoothing and mixup optimization are also performed to improve the generalization ability of the model. Compared with the original YOLO algorithm, the accuracy of the proposed algorithm is improved by 16.01%. By combining the optimization of the position regression loss function with GIOU (Generalized Intersection Over Union), the overall system accuracy can reach 92.7%, which improves the performance by 21.28% compared with the original YOLOv3 algorithm.
      Citation: Algorithms
      PubDate: 2020-05-02
      DOI: 10.3390/a13050114
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 115: A Fuzzy-Based Decision Support Model for
           Risk Maturity Evaluation of Construction Organizations

    • Authors: Roghabadi, Moselhi
      First page: 115
      Abstract: Risk maturity evaluation is an efficient tool that can assist construction organizations in identifying their strengths and weaknesses in risk management processes and in taking the actions necessary to improve these processes. The accuracy of its results relies heavily on the quality of the responses provided by participants specialized in these processes across the organization. Risk maturity models reported in the literature gave equal importance to participants’ responses during model development, neglecting their level of authority in the organization as well as their level of expertise in risk management processes. Unlike the existing models, this paper presents a new risk maturity model that considers the relative importance of the responses provided by the participants during model development, taking into account their authority in the organization and their level of involvement in the risk management processes when calculating the relative weights associated with the risk maturity attributes. It employs an analytic network process (ANP) to model the interdependencies among the risk maturity attributes and utilizes fuzzy set theory to incorporate the uncertainty associated with the ambiguity of the responses used in model development. The developed model allows construction organizations to have a more accurate and realistic view of their current performance in risk management processes. The application of the developed model was investigated by measuring the risk maturity level of an industrial partner working on civil infrastructure projects in Canada.
      Citation: Algorithms
      PubDate: 2020-05-02
      DOI: 10.3390/a13050115
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 116: The Expected Utility Insurance Premium
            Principle with Fourth-Order Statistics: Does It Make a Difference?

    • Authors: Alessandro Mazzoccoli, Maurizio Naldi
      First page: 116
      Abstract: The expected utility principle is often used to compute the insurance premium through a second-order approximation of the expected value of the utility of losses. We investigate the impact of using a more accurate approximation based on the fourth-order statistics of the expected loss and derive the premium under this expectedly more accurate approximation. The comparison between the two approximation levels shows that the second-order-based premium is always lower (i.e., an underestimate of the correct one) for the commonest loss distributions encountered in insurance. The comparison is also carried out for real cases, considering the loss parameters values estimated in the literature. The increased risk of the insurer is assessed through the Value-at-Risk.
      Citation: Algorithms
      PubDate: 2020-05-06
      DOI: 10.3390/a13050116
      Issue No: Vol. 13, No. 5 (2020)
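One common way to realize the comparison the abstract describes is the moment expansion of the exponential-utility (zero-utility) premium, P ~ k1 + a*k2/2 + a^2*k3/6 + a^3*k4/24 in the cumulants k_i of the loss with risk aversion a: truncating after k2 gives the second-order premium, keeping all four terms the fourth-order one. This is a sketch under that assumed utility; the paper's exact approximation may differ:

```python
# Second- vs fourth-order premium approximations from loss cumulants
# k1..k4 under exponential utility with risk-aversion coefficient a.

def premiums(k1, k2, k3, k4, a):
    second = k1 + a * k2 / 2.0
    fourth = second + a ** 2 * k3 / 6.0 + a ** 3 * k4 / 24.0
    return second, fourth
```

For right-skewed, heavy-tailed losses (k3, k4 > 0), the fourth-order premium exceeds the second-order one, matching the abstract's finding that the second-order premium underestimates.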
       
  • Algorithms, Vol. 13, Pages 117: A Novel Hybrid Metaheuristic Algorithm for
           Optimization of Construction Management Site Layout Planning

    • Authors: Prayogo, Cheng, Wu, Redi, Yu, Persada, Nadlifatin
      First page: 117
      Abstract: Symbiotic organisms search (SOS) is a promising metaheuristic algorithm that has been studied recently by numerous researchers due to its capability to solve various hard and complex optimization problems. SOS is a powerful optimization technique that mimics the simulation of the typical symbiotic interactions among organisms in an ecosystem. This study presents a new SOS-based hybrid algorithm for solving the challenging construction site layout planning (CSLP) discrete problems. A new algorithm called the hybrid symbiotic organisms search with local operators (HSOS-LO) represents a combination of the canonical SOS and several local search mechanisms aimed at increasing the searching capability in discrete-based solution space. In this study, three CSLP problems that consist of single and multi-floor facility layout problems are tested, and the obtained results were compared with other widely used metaheuristic algorithms. The results indicate the robust performance of the HSOS-LO algorithm in handling discrete-based CSLP problems.
      Citation: Algorithms
      PubDate: 2020-05-06
      DOI: 10.3390/a13050117
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 118: Distributional Reinforcement Learning with
           Ensembles

    • Authors: Björn Lindenberg, Jonas Nordqvist, Karl-Olof Lindahl
      First page: 118
      Abstract: It is well known that ensemble methods often provide enhanced performance in reinforcement learning. In this paper, we explore this concept further by using group-aided training within the distributional reinforcement learning paradigm. Specifically, we propose an extension to categorical reinforcement learning, where distributional learning targets are implicitly based on the total information gathered by an ensemble. We empirically show that this may lead to much more robust initial learning, a stronger individual performance level, and good efficiency on a per-sample basis.
      Citation: Algorithms
      PubDate: 2020-05-07
      DOI: 10.3390/a13050118
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 119: Forecasting Electricity Prices: A Machine
           Learning Approach

    • Authors: Castelli, Groznik, Popovič
      First page: 119
      Abstract: The electricity market is a complex, evolutionary, and dynamic environment. Forecasting electricity prices is an important issue for all electricity market participants. In this study, we shed light on how to improve electricity price forecasting accuracy through the use of a machine learning technique—namely, a novel genetic programming approach. Drawing on empirical data from the largest EU energy markets, we propose a forecasting model that considers variables related to weather conditions, oil prices, and CO2 coupons and predicts energy prices 24 hours ahead. We show that the proposed model provides more accurate predictions of future electricity prices than existing prediction methods. Our findings will assist electricity market participants in forecasting future price movements.
      Citation: Algorithms
      PubDate: 2020-05-08
      DOI: 10.3390/a13050119
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 120: A Novel Data-Driven Magnetic Resonance
           Spectroscopy Signal Analysis Framework to Quantify Metabolite
           Concentration

    • Authors: Omid Bazgir, Eric Walden, Brian Nutter, Sunanda Mitra
      First page: 120
      Abstract: Developing tools for precise quantification of brain metabolites using magnetic resonance spectroscopy (MRS) is an active area of research with broad application in non-invasive neurodegenerative disease studies. Such tools are mainly developed based on black-box (data-driven) or basis-set approaches. In this study, we offer a multi-stage framework that integrates data-driven and basis-set methods. As the data-driven stage, we first use truncated Hankel singular value decomposition (HSVD) to decompose free induction decay (FID) signals into single-tone FIDs. Subsequently, as the basis-set stage, single-tone FIDs are clustered into basis sets using K-means initialized with prior knowledge of the metabolites. The generated basis sets are fitted to the magnetic resonance (MR) spectra using linear constrained least squares, and the metabolite concentration is then calculated. Prior to our proposed multi-stage approach, a sequence of preprocessing blocks is applied: water peak removal, phase correction, and baseline correction (developed in house).
      Citation: Algorithms
      PubDate: 2020-05-10
      DOI: 10.3390/a13050120
      Issue No: Vol. 13, No. 5 (2020)
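The HSVD stage starts by arranging the sampled FID into a Hankel matrix, whose SVD then separates the signal subspace from noise. Only that first arranging step is sketched here, with stdlib lists; the real method continues with a truncated SVD and damped-sinusoid parameter estimation:

```python
# Build the Hankel matrix of a 1-D signal: row r, column c holds
# signal[r + c], so every anti-diagonal is constant.

def hankel(signal, rows):
    cols = len(signal) - rows + 1
    return [[signal[r + c] for c in range(cols)] for r in range(rows)]
```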
       
  • Algorithms, Vol. 13, Pages 121: Ensemble Deep Learning Models for
           Forecasting Cryptocurrency Time-Series

    • Authors: Ioannis E. Livieris, Emmanuel Pintelas, Stavros Stavroyiannis, Panagiotis Pintelas
      First page: 121
      Abstract: Nowadays, cryptocurrency has infiltrated almost all financial transactions; thus, it is generally recognized as an alternative method for paying and exchanging currency. Cryptocurrency trade constitutes a constantly increasing financial market and a promising type of profitable investment; however, it is characterized by high volatility and strong fluctuations of prices over time. Therefore, the development of an intelligent forecasting model is considered essential for portfolio optimization and decision making. The main contribution of this research is the combination of three of the most widely employed ensemble learning strategies (ensemble-averaging, bagging, and stacking) with advanced deep learning models for forecasting major cryptocurrency hourly prices. The proposed ensemble models were evaluated utilizing state-of-the-art deep learning models as component learners, comprising combinations of long short-term memory (LSTM), bi-directional LSTM, and convolutional layers. The ensemble models were evaluated on predicting the cryptocurrency price for the following hour (regression) and on predicting whether the price for the following hour will increase or decrease with respect to the current price (classification). Additionally, the reliability of each forecasting model and the efficiency of its predictions are evaluated by examining the errors for autocorrelation. Our detailed experimental analysis indicates that ensemble learning and deep learning can efficiently complement each other, for developing strong, stable, and reliable forecasting models.
      Citation: Algorithms
      PubDate: 2020-05-10
      DOI: 10.3390/a13050121
      Issue No: Vol. 13, No. 5 (2020)
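Of the three strategies combined in the paper, ensemble-averaging is the simplest: average the component learners' forecasts. Components are stubbed as plain functions here (hypothetical, not the paper's LSTM models):

```python
# Ensemble-averaging of regression forecasts: each model maps an input
# to a prediction; the ensemble prediction is their mean.

def ensemble_average(models, x):
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)
```

Bagging differs only in how the components are trained (bootstrap resamples), and stacking replaces the mean with a learned combiner.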
       
  • Algorithms, Vol. 13, Pages 122: Incremental FPT Delay

    • Authors: Arne Meier
      First page: 122
      Abstract: In this paper, we study the relationship of parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPTa and CapIncFPTa) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of TFNP, the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, known from Megiddo and Papadimitriou (TCS 1991). We show that the collapse of this class TF(para-NP), the restriction of the function variant of NP to total functions, to F(FPT), the function variant of FPT, is equivalent to OutputFPT coinciding with IncFPT. In addition, these collapses are shown to be equivalent to TFNP = FP, and also to P = NP ∩ coNP. Finally, we show that these two collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections of collapses in parameterized enumeration complexity to collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory.
      Citation: Algorithms
      PubDate: 2020-05-15
      DOI: 10.3390/a13050122
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 123: Mining Sequential Patterns with
           VC-Dimension and Rademacher Complexity

    • Authors: Diego Santoro, Andrea Tonon, Fabio Vandin
      First page: 123
      Abstract: Sequential pattern mining is a fundamental data mining task with application in several domains. We study two variants of this task—the first is the extraction of frequent sequential patterns, whose frequency in a dataset of sequential transactions is higher than a user-provided threshold; the second is the mining of true frequent sequential patterns, which appear with probability above a user-defined threshold in transactions drawn from the generative process underlying the data. We present the first sampling-based algorithm to mine, with high confidence, a rigorous approximation of the frequent sequential patterns from massive datasets. We also present the first algorithms to mine approximations of the true frequent sequential patterns with rigorous guarantees on the quality of the output. Our algorithms are based on novel applications of Vapnik-Chervonenkis dimension and Rademacher complexity, advanced tools from statistical learning theory, to sequential pattern mining. Our extensive experimental evaluation shows that our algorithms provide high-quality approximations for both problems we consider.
      Citation: Algorithms
      PubDate: 2020-05-18
      DOI: 10.3390/a13050123
      Issue No: Vol. 13, No. 5 (2020)
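The core counting subroutine behind both variants can be illustrated with a short sketch. This is not the authors' sampling algorithm, only the exact-frequency baseline it approximates; `occurs_in` treats a pattern as a (possibly non-contiguous) subsequence of a transaction, and all names are hypothetical.

```python
# Hypothetical names; exact-frequency baseline for sequential pattern mining.
def occurs_in(pattern, transaction):
    """True if `pattern` appears as a (non-contiguous) subsequence of `transaction`."""
    it = iter(transaction)
    return all(item in it for item in pattern)

def frequent_patterns(candidates, dataset, theta):
    """Return (pattern, frequency) pairs whose relative frequency is >= theta."""
    result = []
    for pattern in candidates:
        freq = sum(occurs_in(pattern, t) for t in dataset) / len(dataset)
        if freq >= theta:
            result.append((pattern, freq))
    return result
```

A sampling-based approach in the spirit of the paper would run the same count on a random subset of transactions and relax `theta` by an error tolerance derived from VC-dimension or Rademacher bounds.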
       
  • Algorithms, Vol. 13, Pages 124: A Novel Method for Inference of Chemical
           Compounds of Cycle Index Two with Desired Properties Based on Artificial
           Neural Networks and Integer Programming

    • Authors: Jianshen Zhu, Chenxi Wang, Aleksandar Shurbevski, Hiroshi Nagamochi, Tatsuya Akutsu
      First page: 124
      Abstract: Inference of chemical compounds with desired properties is important for drug design, chemo-informatics, and bioinformatics, to which various algorithmic and machine learning techniques have been applied. Recently, a novel method has been proposed for this inference problem using both artificial neural networks (ANN) and mixed integer linear programming (MILP). This method consists of the training phase and the inverse prediction phase. In the training phase, an ANN is trained so that the output of the ANN takes a value nearly equal to a given chemical property for each sample. In the inverse prediction phase, a chemical structure is inferred using MILP and enumeration so that the structure can have a desired output value for the trained ANN. However, the framework has been applied only to the case of acyclic and monocyclic chemical compounds so far. In this paper, we significantly extend the framework and present a new method for the inference problem for rank-2 chemical compounds (chemical graphs with cycle index 2). The results of computational experiments using such chemical properties as octanol/water partition coefficient, melting point, and boiling point suggest that the proposed method is much more useful than the previous method.
      Citation: Algorithms
      PubDate: 2020-05-18
      DOI: 10.3390/a13050124
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 125: Moving Deep Learning to the Edge

    • Authors: Mário P. Véstias, Rui Policarpo Duarte, José T. de Sousa, Horácio C. Neto
      First page: 125
      Abstract: Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications where low latency is required. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devices themselves, in order to alleviate cloud server workloads and improve latency. However, edge devices are less powerful than cloud servers, and many are subject to energy constraints. Hence, new resource and energy-oriented deep learning models are required, as well as new computing platforms. This paper reviews the main research directions for edge computing deep learning algorithms.
      Citation: Algorithms
      PubDate: 2020-05-18
      DOI: 10.3390/a13050125
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 126: PUB-SalNet: A Pre-Trained Unsupervised
           Self-Aware Backpropagation Network for Biomedical Salient Segmentation

    • Authors: Feiyang Chen, Ying Jiang, Xiangrui Zeng, Jing Zhang, Xin Gao, Min Xu
      First page: 126
      Abstract: Salient segmentation is a critical step in biomedical image analysis, aiming to cut out regions that are most interesting to humans. Recently, supervised methods have achieved promising results in biomedical areas, but they depend on annotated training data sets, which require labor and proficiency in related background knowledge. In contrast, unsupervised learning makes data-driven decisions by obtaining insights directly from the data themselves. In this paper, we propose a completely unsupervised self-aware network based on pre-training and attentional backpropagation for biomedical salient segmentation, named PUB-SalNet. Firstly, we aggregate a new biomedical data set, called SalSeg-CECT, from several simulated Cellular Electron Cryo-Tomography (CECT) data sets featuring rich salient objects, different SNR settings, and various resolutions. Based on the SalSeg-CECT data set, we then pre-train a model specially designed for biomedical tasks as a backbone module to initialize the network parameters. Next, we present a U-SalNet network that learns to selectively attend to salient objects. It includes two types of attention modules to facilitate learning saliency through global contrast and local similarity. Lastly, we jointly refine the salient regions together with feature representations from U-SalNet, with the parameters updated by self-aware attentional backpropagation. We apply PUB-SalNet to the analysis of 2D simulated and real images and achieve state-of-the-art performance on simulated biomedical data sets. Furthermore, our proposed PUB-SalNet can easily be extended to 3D images. The experimental results on the 2D and 3D data sets also demonstrate the generalization ability and robustness of our method.
      Citation: Algorithms
      PubDate: 2020-05-19
      DOI: 10.3390/a13050126
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 127: The Effect of Different Deep Network
           Architectures upon CNN-Based Gaze Tracking

    • Authors: Hui-Hui Chen, Bor-Jiunn Hwang, Jung-Shyr Wu, Po-Ting Liu
      First page: 127
      Abstract: In this paper, we explore the effect of using different convolutional layers, batch normalization and the global average pooling layer upon a convolutional neural network (CNN) based gaze tracking system. A novel method is proposed to label the participant’s face images with gaze points retrieved from an eye tracker while watching videos, in order to build a training dataset closer to human visual behavior. The participants can swing their heads freely; therefore, the most real and natural images can be obtained without too many restrictions. The labeled data are classified according to the coordinates of gaze and the area of interest on the screen. Varied network architectures are then applied to estimate and compare the effects of the number of convolutional layers, batch normalization (BN) and a global average pooling (GAP) layer in place of the fully connected layer. Three schemes, namely the single-eye image, double-eye image and facial image, each with data augmentation, are fed into the neural network for training and evaluation. The input image of the eye or face for an eye tracking system is mostly a small-sized image with relatively few features. The results show that BN and GAP are helpful in overcoming the resulting training difficulties and in reducing the number of network parameters. The accuracy is significantly improved when GAP and BN are used at the same time. Overall, the face scheme achieves the highest accuracy, 0.883, when BN and GAP are used together. Additionally, compared to the case of a fully connected layer with 512 units, the number of parameters is reduced by less than 50% and the accuracy is improved by about 2%. A detection accuracy comparison of our model with the existing methods of George and Routray shows that our proposed method improves prediction accuracy by more than 6%.
      Citation: Algorithms
      PubDate: 2020-05-19
      DOI: 10.3390/a13050127
      Issue No: Vol. 13, No. 5 (2020)
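The parameter savings from replacing a fully connected head with GAP can be shown with a small arithmetic sketch; the feature-map shape (7×7×64), hidden size (512) and nine gaze classes below are assumptions for illustration, not values taken from the paper.

```python
# Illustrative parameter counts: fully connected head vs. global average pooling.
# All shapes are hypothetical, not taken from the paper.
def fc_head_params(h, w, c, hidden, classes):
    # Flatten h*w*c features into `hidden` units, then classify: weights + biases.
    return (h * w * c) * hidden + hidden + hidden * classes + classes

def gap_head_params(c, classes):
    # GAP reduces each channel to one value: only c -> classes weights remain.
    return c * classes + classes

fc = fc_head_params(7, 7, 64, 512, 9)
gap = gap_head_params(64, 9)
print(fc, gap, gap / fc)
```

The ratio makes the point of the abstract concrete: almost all head parameters sit in the flatten-to-hidden weight matrix that GAP removes.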
       
  • Algorithms, Vol. 13, Pages 128: Change-Point Detection in Autoregressive
           Processes via the Cross-Entropy Method

    • Authors: Lijing Ma, Georgy Sofronov
      First page: 128
      Abstract: It is very often the case that at some moment a time series process abruptly changes its underlying structure; therefore, it is very important to detect such change-points accurately. In this problem, called the change-point (or break-point) detection problem, we need a method that divides the original nonstationary time series into piecewise stationary segments. In this paper, we develop a flexible method to estimate the unknown number and locations of change-points in autoregressive time series. In order to find the optimal value of a performance function based on the Minimum Description Length principle, we develop a Cross-Entropy algorithm for this combinatorial optimization problem. Our numerical experiments show that the proposed approach is very efficient in detecting multiple change-points when the underlying process has moderate to substantial variations in the mean and the autocorrelation coefficient. We also apply the proposed method to real data: the daily AUD/CNY exchange rate series from 2 January 2018 to 24 March 2020.
      Citation: Algorithms
      PubDate: 2020-05-20
      DOI: 10.3390/a13050128
      Issue No: Vol. 13, No. 5 (2020)
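As a minimal illustration of the MDL idea (not the authors' Cross-Entropy search, which handles multiple change-points in autoregressive models), the sketch below scans exhaustively for a single change in the mean using a Gaussian two-part code length; the penalty constant and the variance floor are assumptions.

```python
import math

def seg_cost(x):
    """Gaussian code length (up to constants) of one segment: n/2 * log(MSE).
    The 1e-12 floor guards against constant segments with zero variance."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return 0.5 * n * math.log(var + 1e-12)

def best_single_changepoint(x, penalty=2.0):
    """Exhaustive scan for one change-point minimising an MDL-style cost;
    returns None if no split beats the no-change baseline."""
    best_tau, best = None, seg_cost(x)  # baseline: encode x as one segment
    for tau in range(2, len(x) - 1):
        c = seg_cost(x[:tau]) + seg_cost(x[tau:]) + penalty * math.log(len(x))
        if c < best:
            best_tau, best = tau, c
    return best_tau
```

The paper's approach replaces the exhaustive scan with Cross-Entropy sampling over vectors of candidate change-point locations, which scales to multiple breaks.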
       
  • Algorithms, Vol. 13, Pages 129: Image Resolution Enhancement of Highly
           Compressively Sensed CT/PET Signals

    • Authors: Krzysztof Malczewski
      First page: 129
      Abstract: One of the most challenging aspects of medical modalities such as Computed Tomography (CT), as well as hybrid techniques such as CT/PET (Computed Tomography/Positron Emission Tomography) and PET/MRI, is finding a balance between examination time, radiation dose, and image quality. The need for a dense sampling grid is associated with two major factors: image resolution enhancement, which strengthens human perception, and the interpretation of image features. Both aspects make unsupervised image processing much easier. The presented algorithm employs super-resolution reconstruction with high-accuracy motion field estimation at its core for CT/PET image enhancement, and it starts by processing compressively sensed input signals. This paper shows that it is possible to achieve higher image resolution while keeping the same radiation dose. The purpose of this paper is to propose a highly effective CT/PET image reconstruction strategy allowing for simultaneous resolution enhancement and scanning time minimisation. The algorithm aims to overcome two major obstacles—image resolution limitation and reconstruction time efficiency—by combining a highly sparse Ridgelet-analysis-based sampling pattern and PET signal sensing with super-resolution (SR) image enhancement. Despite the diverse nature of Computed Tomography, the applied Ridgelet analysis proved efficient in reducing acquisition times while maintaining satisfactory scan quality. The presented super-resolution technique, designed for highly compressively sensed raw data from hybrid CT/PET scanners, improves image resolution while reducing motion artefacts and keeping scanning times low.
      Citation: Algorithms
      PubDate: 2020-05-21
      DOI: 10.3390/a13050129
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 130: Uncertainty Quantification Approach on
           Numerical Simulation for Supersonic Jets Performance

    • Authors: Carlo Cravero, Davide De Domenico, Andrea Ottonello
      First page: 130
      Abstract: One of the main issues addressed in any engineering design problem is to predict the performance of the component or system as accurately and realistically as possible, taking into account the variability of operating conditions or the uncertainty of input data (boundary conditions or geometry tolerances). In this paper, the propagation of uncertainty in boundary conditions through a numerical model of a supersonic nozzle is investigated. The statistics of the problem response functions are evaluated following a ‘Surrogate-Based Uncertainty Quantification’ approach. The approach involves: (a) the generation of a response surface, starting from a design of experiments (DoE), to approximate the convergent–divergent ‘physical’ model (expensive to simulate); (b) the application of an uncertainty quantification (UQ) technique based on Latin Hypercube Sampling (LHS) to the meta-model. Probability density functions are introduced for the inlet boundary conditions in order to quantify their effects on the output nozzle performance. The physical problem considered is very relevant for experimental tests of the UQ approach because of its high non-linearity: a small perturbation of the input data can drive the solution to a completely different output condition. The CFD simulations and the uncertainty quantification were performed by coupling the open-source Dakota platform with the ANSYS Fluent® commercial CFD software; the process is automated through scripting. The procedure adopted in this work demonstrates the applicability of advanced simulation techniques (such as UQ analysis) to industrial technical problems. Moreover, the analysis highlights the practical use of uncertainty quantification techniques in predicting the performance of a nozzle design affected by off-design conditions and fluid-dynamic complexity due to strong nonlinearity.
      Citation: Algorithms
      PubDate: 2020-05-22
      DOI: 10.3390/a13050130
      Issue No: Vol. 13, No. 5 (2020)
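The LHS step of such a UQ workflow can be sketched in a few lines. This is a generic Latin Hypercube sampler, not the Dakota implementation, and the bounds in the usage note are hypothetical.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Minimal Latin Hypercube Sampling: each dimension is split into
    n_samples equal strata, one uniform draw per stratum, shuffled so the
    strata pairings across dimensions are random."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = lo + (hi - lo) * strata[i]
    return samples
```

For example, `latin_hypercube(10, [(0.9, 1.1), (280.0, 320.0)])` would produce ten stratified (pressure-ratio, temperature)-like input pairs to evaluate on a surrogate model.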
       
  • Algorithms, Vol. 13, Pages 75: Investigating Feature Selection and Random
           Forests for Inter-Patient Heartbeat Classification

    • Authors: Jose Francisco Saenz-Cogollo, Maurizio Agelli
      First page: 75
      Abstract: Finding an optimal combination of features and classifier is still an open problem in the development of automatic heartbeat classification systems, especially when applications that involve resource-constrained devices are considered. In this paper, a novel study of the selection of informative features and the use of a random forest classifier while following the recommendations of the Association for the Advancement of Medical Instrumentation (AAMI) and an inter-patient division of datasets is presented. Features were selected using a filter method based on the mutual information ranking criterion on the training set. Results showed that normalized beat-to-beat (R–R) intervals and features relative to the width of the ventricular depolarization waves (QRS complex) are the most discriminative among those considered. The best results achieved on the MIT-BIH Arrhythmia Database were an overall accuracy of 96.14% and F1-scores of 97.97%, 73.06%, and 90.85% in the classification of normal beats, supraventricular ectopic beats, and ventricular ectopic beats, respectively. In comparison with other state-of-the-art approaches tested under similar constraints, this work represents one of the highest performances reported to date while relying on a very small feature vector.
      Citation: Algorithms
      PubDate: 2020-03-25
      DOI: 10.3390/a13040075
      Issue No: Vol. 13, No. 4 (2020)
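The filter step, ranking features by mutual information with the class label, can be sketched for discrete features as follows; this is a generic empirical estimator, not the authors' exact pipeline, and the helper names are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in nats for two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    # sum over joint outcomes: p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def rank_features(feature_columns, labels):
    """Rank feature columns by mutual information with the class labels."""
    scores = [(mutual_information(col, labels), i)
              for i, col in enumerate(feature_columns)]
    return sorted(scores, reverse=True)
```

Continuous features such as R–R intervals would first be discretized (or handled with a continuous MI estimator) before ranking.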
       
  • Algorithms, Vol. 13, Pages 76: Experiments-Based Comparison of Different
           Power Controllers for a Solid Oxide Fuel Cell Against Model Imperfections
           and Delay Phenomena

    • Authors: Wiebke Frenkel, Andreas Rauh, Julia Kersten, Harald Aschemann
      First page: 76
      Abstract: Solid oxide fuel cell systems such as those presented in this paper are applicable not only to a pure supply of electric energy; they can typically also be used in decentralized power stations, i.e., as micro-cogeneration systems for houses, where both electric and thermal energy are required. For that application, the electric power demand is obviously not constant but changes over time. It essentially depends on the user profiles of said houses, which can refer to, e.g., private households as well as offices, and the power use is furthermore not predefined. For an optimal operation of the fuel cell, we want to adjust the power to match the demand with sufficiently small time constants, without the implementation of mid- or long-term electrical storage systems such as battery buffers. Adapting the produced electric power requires a simple, yet sufficiently robust, feedback controller regulating the hydrogen mass flow into the cells. To achieve this goal, four different controllers, namely, a PI output-feedback controller combined with a feedforward control, an internal model control (IMC) approach, a sliding-mode (SM) controller and a state-feedback controller, are developed and compared in this paper. As the challenge is to find a controller ensuring steady-state accuracy and good tracking behavior despite the nonlinearities and uncertainties of the plant, the comparison was carried out with regard to these requirements. Simulations and experiments show that the IMC outperforms the alternatives with respect to steady-state accuracy and tracking behavior.
      Citation: Algorithms
      PubDate: 2020-03-25
      DOI: 10.3390/a13040076
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 77: On Classical Solutions for A
           Kuramoto—Sinelshchikov—Velarde-Type Equation

    • Authors: Giuseppe Maria Coclite, Lorenzo di Ruvo
      First page: 77
      Abstract: The Kuramoto–Sinelshchikov–Velarde equation describes the evolution of a phase turbulence in reaction-diffusion systems or the evolution of the plane flame propagation, taking into account the combined influence of diffusion and thermal conduction of the gas on the stability of a plane flame front. In this paper, we prove the well-posedness of the classical solutions for the Cauchy problem.
      Citation: Algorithms
      PubDate: 2020-03-28
      DOI: 10.3390/a13040077
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 78: Beyond Newton: A New Root-Finding
           Fixed-Point Iteration for Nonlinear Equations

    • Authors: Ankush Aggarwal, Sanjay Pant
      First page: 78
      Abstract: Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration and the improvement in convergence of specific problems.
      Citation: Algorithms
      PubDate: 2020-03-29
      DOI: 10.3390/a13040078
      Issue No: Vol. 13, No. 4 (2020)
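As a hedged sketch of the limit case the paper rediscovers, the following compares the classical Newton iteration with Halley's method on a scalar example; the test function and starting point are arbitrary choices, not taken from the paper.

```python
def newton(f, df, x, iters=50, tol=1e-12):
    """Classical Newton iteration x <- x - f(x)/f'(x)."""
    for _ in range(iters):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def halley(f, df, d2f, x, iters=50, tol=1e-12):
    """Halley's method, the limit case of the family derived in the paper:
    x <- x - 2 f f' / (2 f'^2 - f f'')."""
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        step = 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: real root of f(x) = x^3 - 2, i.e. 2**(1/3).
f   = lambda x: x ** 3 - 2
df  = lambda x: 3 * x ** 2
d2f = lambda x: 6 * x
```

Both iterations converge from x = 1.5 on this example; the paper's point is that transformed iterations of this family keep converging from starting points where plain Newton diverges.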
       
  • Algorithms, Vol. 13, Pages 79: About Granular Rough Computing—Overview
           of Decision System Approximation Techniques and Future Perspectives

    • Authors: Piotr Artiemjew
      First page: 79
      Abstract: Granular computing is a broad discipline whose basic operation is to work on groups of similar objects according to a fixed similarity measure. The first references to granular computing can be found in Zadeh's work on fuzzy set theory. Granular computing allows for a very natural modelling of the world; it is quite likely that the human brain, while solving problems, performs granular calculations on data collected from the senses. Researchers of this paradigm have demonstrated its wide-ranging possibilities: among other things, granular techniques are used in classification, regression, missing values handling, feature selection, and as mechanisms of data approximation. It is impossible to cite all methods based on granular computing, so we discuss a selected group of techniques. In this article, we present a review of recently developed granulation techniques belonging to the family of approximation algorithms founded by Polkowski in the framework of rough set theory. Starting from Polkowski's basic standard granulation, we describe the concept-dependent, layered, and epsilon variants that we developed further, as well as our recent homogeneous granulation. We present simple numerical examples and samples of research results, and assess the effectiveness of these methods in terms of decision system size reduction and maintenance of the internal knowledge of the original data. The reduction in the number of objects in our techniques, while maintaining classification efficiency, reaches 90 percent for standard granulation with a kNN classifier (we achieve similar efficiency for the concept-dependent technique with the Naive Bayes classifier). The largest reduction in the exhaustive set of rules, at an efficiency level comparable to the original data, is 99 percent, obtained with concept-dependent granulation. In homogeneous variants, the reduction is less than 60 percent, but the advantage of these techniques is that it is not necessary to search for optimal granulation parameters, which are selected dynamically. We also describe potential directions for the development of granular computing techniques through the prism of the described methods.
      Citation: Algorithms
      PubDate: 2020-03-29
      DOI: 10.3390/a13040079
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 80: Hierarchical-Matching-Based Online and
           Real-Time Multi-Object Tracking with Deep Appearance Features

    • Authors: Ji, Yu, Wu
      First page: 80
      Abstract: Based on tracking-by-detection, we propose a hierarchical-matching-based online and real-time multi-object tracking approach with deep appearance features, which can effectively reduce the false positives (FP) in tracking. For the purpose of increasing the accuracy rate of data association, we define the trajectory confidence using its position information, appearance information, and the information of historical relevant detections, after which we can classify the trajectories into different levels. In order to obtain discriminative appearance features, we developed a deep convolutional neural network to extract the appearance features of objects and trained it on a large-scale pedestrian re-identification dataset. Last but not least, we used the proposed diverse and hierarchical matching strategy to associate detection and trajectory sets. Experimental results on the MOT benchmark dataset show that our proposed approach performs well against other online methods, especially for the metrics of FP and frames per second (FPS).
      Citation: Algorithms
      PubDate: 2020-03-29
      DOI: 10.3390/a13040080
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 81: Detection and Monitoring of Bottom-Up
           Cracks in Road Pavement Using a Machine-Learning Approach

    • Authors: Filippo Giammaria Praticò, Rosario Fedele, Vitalii Naumov, Tomas Sauer
      First page: 81
      Abstract: The current methods that aim at monitoring the structural health status (SHS) of road pavements allow detecting surface defects and failures. This notwithstanding, there is a lack of methods and systems that are able to identify concealed cracks (particularly, bottom-up cracks) and monitor their growth over time. For this reason, the objective of this study is to set up a supervised machine learning (ML)-based method for the identification and classification of the SHS of a differently cracked road pavement based on its vibro-acoustic signature. The method aims at collecting these signatures (using acoustic-sensors, located at the roadside) and classifying the pavement’s SHS through ML models. Different ML classifiers (i.e., multilayer perceptron, MLP, convolutional neural network, CNN, random forest classifier, RFC, and support vector classifier, SVC) were used and compared. Results show the possibility of associating with great accuracy (i.e., MLP = 91.8%, CNN = 95.6%, RFC = 91.0%, and SVC = 99.1%) a specific vibro-acoustic signature to a differently cracked road pavement. These results are encouraging and represent the bases for the application of the proposed method in real contexts, such as monitoring roads and bridges using wireless sensor networks, which is the target of future studies.
      Citation: Algorithms
      PubDate: 2020-03-31
      DOI: 10.3390/a13040081
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 82: Algebraic Point Projection for Immersed
           Boundary Analysis on Low Degree NURBS Curves and Surfaces

    • Authors: Huanyu Liao, Pavan Kumar Vaitheeswaran, Tao Song, Ganesh Subbarayan
      First page: 82
      Abstract: Point projection is an important geometric need when boundaries described by parametric curves and surfaces are immersed in domains. In problems where an immersed parametric boundary evolves with time, as in solidification or fracture analysis, the projection from a point in the domain to the boundary is necessary to determine the interaction of the moving boundary with the underlying domain approximation. Furthermore, during analysis, since the driving force behind interface evolution depends on locally computed curvatures and normals, it is ideal if the parametric entity is not approximated as piecewise-linear. To address this challenge, we present in this paper an algebraic procedure to project a point onto non-uniform rational B-spline (NURBS) curves and surfaces. The developed technique utilizes the resultant theory to construct implicit forms of parametric Bézier patches, level sets of which are termed algebraic level sets (ALS). Boolean compositions of the algebraic level sets are carried out using the theory of R-functions. The algebraic level sets and their gradients at a given point on the domain are then used to project the point onto the immersed boundary. Beginning with a first-order algorithm, sequentially refined procedures culminating in a second-order projection algorithm are described for NURBS curves and surfaces. Examples are presented to illustrate the efficiency and robustness of the developed method. More importantly, the method is shown to be robust and able to generate valid solutions even for curves and surfaces with high local curvature or G0 continuity—problems where the Newton–Raphson method fails due to discontinuity in the projected points or because the numerical iterations fail to converge to a solution, respectively.
      Citation: Algorithms
      PubDate: 2020-03-31
      DOI: 10.3390/a13040082
      Issue No: Vol. 13, No. 4 (2020)
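A minimal sketch of the first-order idea, assuming a quadratic Bézier curve and plain gradient descent on the squared distance D(t) = |C(t) − p|²; the paper's algebraic-level-set machinery is not reproduced here, and the step size and iteration count are assumptions.

```python
def bezier2(p0, p1, p2, t):
    """Point C(t) on a quadratic Bezier curve with control points p0, p1, p2."""
    return tuple((1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
                 for a, b, c in zip(p0, p1, p2))

def bezier2_deriv(p0, p1, p2, t):
    """Derivative C'(t) of the quadratic Bezier curve."""
    return tuple(2 * (1 - t) * (b - a) + 2 * t * (c - b)
                 for a, b, c in zip(p0, p1, p2))

def project_point(p, p0, p1, p2, t=0.5, lr=0.02, iters=1000):
    """First-order projection: gradient descent on D(t) = |C(t) - p|^2,
    with t clamped to the parameter interval [0, 1]."""
    for _ in range(iters):
        c = bezier2(p0, p1, p2, t)
        d = bezier2_deriv(p0, p1, p2, t)
        grad = 2 * sum((ci - pi) * di for ci, pi, di in zip(c, p, d))
        t = min(1.0, max(0.0, t - lr * grad))
    return t
```

A second-order (Newton-type) refinement would divide by the second derivative of D(t) instead of using a fixed step, which is where the convergence issues discussed in the abstract arise.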
       
  • Algorithms, Vol. 13, Pages 83: Ensemble Deep Learning for Multilabel
           Binary Classification of User-Generated Content

    • Authors: Giannis Haralabopoulos, Ioannis Anagnostopoulos, Derek McAuley
      First page: 83
      Abstract: Sentiment analysis usually refers to the analysis of human-generated content via a polarity filter, while affective computing deals with the exact emotions conveyed through information. Emotional information most frequently cannot be accurately described by a single emotion class. Multilabel classifiers can categorize human-generated content in multiple emotional classes, and ensemble learning can improve the statistical, computational and representation aspects of such classifiers. We present a baseline stacked ensemble and propose a weighted ensemble. Our proposed weighted ensemble can use multiple classifiers to improve classification results without hyperparameter tuning or data overfitting. We evaluate our ensemble models with two datasets. The first is from SemEval-2018 Task 1 and contains almost 7000 Tweets labeled with 11 sentiment classes. The second is the Toxic Comment Dataset with more than 150,000 comments labeled with six different levels of abuse or harassment. Our results suggest that ensemble learning improves classification results by 1.5% to 5.4%.
      Citation: Algorithms
      PubDate: 2020-04-01
      DOI: 10.3390/a13040083
      Issue No: Vol. 13, No. 4 (2020)
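Weighted soft voting over per-label probabilities can be sketched as follows; this is a generic illustration rather than the authors' exact ensemble, and in practice the weights would come from each classifier's validation performance.

```python
def weighted_ensemble(prob_lists, weights, threshold=0.5):
    """Weighted soft voting for multilabel classification.
    prob_lists[k][j] is classifier k's probability that label j applies.
    Returns (binary label vector, weighted-average probabilities)."""
    total = sum(weights)
    n_labels = len(prob_lists[0])
    avg = [sum(w * probs[j] for w, probs in zip(weights, prob_lists)) / total
           for j in range(n_labels)]
    return [int(p >= threshold) for p in avg], avg
```

With two classifiers weighted 2:1 and per-label probabilities `[0.9, 0.2]` and `[0.6, 0.8]`, the ensemble assigns the first label but not the second.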
       
  • Algorithms, Vol. 13, Pages 84: Representation of Traffic Congestion Data
           for Urban Road Traffic Networks Based on Pooling Operations

    • Authors: Sen Zhang, Shaobo Li, Xiang Li, Yong Yao
      First page: 84
      Abstract: In order to improve the efficiency of transportation networks, it is critical to forecast traffic congestion. Large-scale traffic congestion data have become available and accessible, yet they need to be properly represented in order to avoid overfitting, reduce the requirements of computational resources, and be utilized effectively by various methodologies and models. Inspired by pooling operations in deep learning, we propose a representation framework for traffic congestion data in urban road traffic networks. This framework consists of grid-based partition of urban road traffic networks and a pooling operation to reduce multiple values into an aggregated one. We also propose using a pooling operation to calculate the maximum value in each grid (MAV). Raw snapshots of traffic congestion maps are transformed and represented as a series of matrices which are used as inputs to a spatiotemporal congestion prediction network (STCN) to evaluate the effectiveness of representation when predicting traffic congestion. STCN combines convolutional neural networks (CNNs) and long short-term memory neural network (LSTMs) for their spatiotemporal capability. CNNs can extract spatial features and dependencies of traffic congestion between roads, and LSTMs can learn their temporal evolution patterns and correlations. An empirical experiment on an urban road traffic network shows that when incorporated into our proposed representation framework, MAV outperforms other pooling operations in the effectiveness of the representation of traffic congestion data for traffic congestion prediction, and that the framework is cost-efficient in terms of computational resources.
      Citation: Algorithms
      PubDate: 2020-04-02
      DOI: 10.3390/a13040084
      Issue No: Vol. 13, No. 4 (2020)
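The MAV pooling operation, reducing each grid cell of a congestion map to its maximum value, might look like this minimal sketch (assuming, for simplicity, map dimensions divisible by the grid size):

```python
def mav_pooling(congestion, grid):
    """Partition a 2-D congestion map into grid x grid cells and keep the
    maximum value (MAV) in each cell."""
    rows, cols = len(congestion), len(congestion[0])
    rh, cw = rows // grid, cols // grid
    return [[max(congestion[r][c]
                 for r in range(i * rh, (i + 1) * rh)
                 for c in range(j * cw, (j + 1) * cw))
             for j in range(grid)]
            for i in range(grid)]
```

Snapshot matrices pooled this way form the reduced input series fed to a spatiotemporal prediction network like the STCN described above.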
       
  • Algorithms, Vol. 13, Pages 85: Research and Study of the Hybrid Algorithms
           Based on the Collective Behavior of Fish Schools and Classical
           Optimization Methods

    • Authors: Demidova, Gorchakov
      First page: 85
      Abstract: Inspired by biological systems, swarm intelligence algorithms are widely used to solve multimodal optimization problems. In this study, we consider the hybridization problem of an algorithm based on the collective behavior of fish schools. The algorithm is computationally inexpensive compared to other population-based algorithms. Accuracy of fish school search increases with the increase of predefined iteration count, but this also affects computation time required to find a suboptimal solution. We propose two hybrid approaches, intending to improve the evolutionary-inspired algorithm accuracy by using classical optimization methods, such as gradient descent and Newton's optimization method. The study shows the effectiveness of the proposed hybrid algorithms, and the strong advantage of the hybrid algorithm based on fish school search and gradient descent. We provide a solution for the linearly inseparable exclusive disjunction problem using the developed algorithm and a perceptron with one hidden layer. To demonstrate the effectiveness of the algorithms, we visualize high dimensional loss surfaces near global extreme points. In addition, we apply the distributed version of the most effective hybrid algorithm to the hyperparameter optimization problem of a neural network.
      Citation: Algorithms
      PubDate: 2020-04-03
      DOI: 10.3390/a13040085
      Issue No: Vol. 13, No. 4 (2020)
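The hybridization idea above can be illustrated in miniature: a cheap population-based search (standing in for fish school search) supplies a starting point, which classical gradient descent then refines. The objective, the swim/selection operators and all step sizes below are illustrative assumptions, not the paper's FSS operators.

```python
import random

def objective(x):
    return (x - 3.0) ** 2          # toy objective, minimum at x = 3

def gradient(x):
    return 2.0 * (x - 3.0)

def population_search(n_agents=20, steps=30, seed=1):
    """Crude population search: random swims plus keep-the-best selection."""
    rng = random.Random(seed)
    school = [rng.uniform(-10, 10) for _ in range(n_agents)]
    for _ in range(steps):
        school = [x + rng.uniform(-0.5, 0.5) for x in school]   # random swim
        school.sort(key=objective)
        school = school[:n_agents // 2] * 2                     # keep the best half
    return min(school, key=objective)

def refine(x, lr=0.1, steps=100):
    """Classical gradient descent polishing the population's best point."""
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

x0 = population_search()       # global phase
x_star = refine(x0)            # local phase
```

The division of labor mirrors the paper's finding: the population phase locates a promising basin cheaply, and the classical method supplies the final accuracy.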
       
  • Algorithms, Vol. 13, Pages 86: Confidence-Based Voting for the Design of
           Interpretable Ensembles with Fuzzy Systems

    • Authors: Vladimir Stanovov, Shakhnaz Akhmedova, Yukihiro Kamiya
      First page: 86
      Abstract: In this study, a new voting procedure for combining fuzzy-logic-based classifiers with other classifiers, called confidence-based voting, is proposed. The method combines two classifiers: a fuzzy classification system and a second classifier. When the fuzzy system returns a high confidence level, i.e., a large membership value, the fuzzy system performs the classification; otherwise, the second classifier is applied. As a result, most of the samples are classified by the explainable and interpretable fuzzy system, and the second, more accurate but less interpretable classifier is applied only to the most difficult cases. To show the efficiency of the proposed approach, a set of experiments is performed on test datasets, as well as on two problems of estimating a person’s emotional state from data collected by non-contact vital sensors, which use the Doppler effect. To validate the accuracies of the proposed approach, statistical tests were used for comparison. The obtained results demonstrate the efficiency of the proposed technique, as it both improves classification accuracy and explains the decision-making process.
      Citation: Algorithms
      PubDate: 2020-04-06
      DOI: 10.3390/a13040086
      Issue No: Vol. 13, No. 4 (2020)
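The confidence-based voting rule described above reduces to a simple dispatch. A minimal sketch, with toy lambda classifiers standing in for the fuzzy system and the stronger black-box model (the threshold value is an assumption):

```python
# Confidence-based voting: the interpretable fuzzy classifier answers when its
# membership value is high; a stronger backup model handles the hard cases.

def confidence_vote(sample, fuzzy_clf, backup_clf, threshold=0.8):
    label, membership = fuzzy_clf(sample)
    if membership >= threshold:      # confident: keep the interpretable answer
        return label
    return backup_clf(sample)        # difficult case: defer to the second model

# Toy stand-ins for the two classifiers.
fuzzy = lambda x: ("high", 0.9) if x > 10 else ("low", 0.4)
backup = lambda x: "high" if x > 5 else "low"
```

With this split, the fraction of samples answered by the fuzzy system is controlled directly by the threshold, which is what keeps most decisions interpretable.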
       
  • Algorithms, Vol. 13, Pages 87: The Need for Machine-Processable Agreements
           in Health Data Management

    • Authors: George Konstantinidis, Adriane Chapman, Mark J. Weal, Ahmed Alzubaidi, Lisa M. Ballard, Anneke M. Lucassen
      First page: 87
      Abstract: Data processing agreements in health data management are laid out by organisations in monolithic “Terms and Conditions” documents written in natural legal language. These top-down policies usually protect the interests of the service providers rather than the data owners. They are coarse-grained and allow individuals only a few opt-in or opt-out options for expressing consent on personal data processing, and these options often do not transfer to software as intended. In this paper, we study the problem of health data sharing and advocate the need for individuals to describe their personal contract of data usage in a formal, machine-processable language. We develop an application for sharing patient genomic information and test results, and use interactions with patients and clinicians to identify the particular features a privacy/policy/consent language should offer in this complicated domain. We show how Semantic Web technologies can play a central role in this approach by providing the formal tools and features required in such a language. We present our ongoing approach to construct an ontology-based framework and a policy language that allows patients and clinicians to express fine-grained consent, preferences or suggestions on sharing medical information. Our language offers unique features such as multi-party ownership of data and data-sharing dependencies. We evaluate the landscape of policy languages from different areas and show how they lack major requirements needed in health data management. In addition to enabling patients, our approach helps organisations increase technological capabilities, abide by legal requirements, and save resources.
      Citation: Algorithms
      PubDate: 2020-04-07
      DOI: 10.3390/a13040087
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 88: Feasibility Pump Algorithm for Sparse
           Representation under Gaussian Noise

    • Authors: Florin Ilarion Miertoiu, Bogdan Dumitrescu
      First page: 88
      Abstract: In this paper, the Feasibility Pump is adapted to the problem of sparse representation of signals affected by Gaussian noise. This adaptation is tested and compared to Orthogonal Matching Pursuit (OMP) and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The Feasibility Pump recovers the true support much better than the other two algorithms, and, as the SNR decreases and the support size increases, it achieves smaller recovery and representation errors than its competitors. It is observed that, for the algorithm to be efficient, a regularization parameter and a weight term for the error are needed.
      Citation: Algorithms
      PubDate: 2020-04-09
      DOI: 10.3390/a13040088
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 89: Success History-Based Position Adaptation
           in Fuzzy-Controlled Ensemble of Biology-Inspired Algorithms

    • Authors: Shakhnaz Akhmedova, Vladimir Stanovov, Danil Erokhin, Olga Semenkina
      First page: 89
      Abstract: In this study, a new modification of the meta-heuristic approach called Co-Operation of Biology-Related Algorithms (COBRA) is proposed. Originally, the COBRA approach was based on a fuzzy logic controller and used for solving real-parameter optimization problems. The basic idea consists of the cooperative work of six well-known biology-inspired algorithms, referred to as components. However, it was established that the search efficiency of COBRA depends on its ability to keep the balance between exploitation and exploration when solving optimization problems. The new modification of the COBRA approach is based on a different method for generating potential solutions. This method keeps a historical memory of successful positions found by individuals to lead them in different directions and thereby improve the exploitation and exploration capabilities. The proposed technique was applied to the COBRA components and to its basic steps. The newly proposed meta-heuristic, as well as other modifications of the COBRA approach and its components, was evaluated on three sets of benchmark problems. The experimental results obtained by all algorithms with the same computational effort are presented and compared. The proposed modification outperformed the other algorithms used in the comparison, demonstrating its usefulness and workability.
      Citation: Algorithms
      PubDate: 2020-04-09
      DOI: 10.3390/a13040089
      Issue No: Vol. 13, No. 4 (2020)
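The core idea above, generating new positions guided by a memory of past successes, can be sketched as a tiny hill climber with a bounded success archive. The sampling and update rules below are assumptions distilled from the abstract, not the paper's exact formulas.

```python
import random

def optimize(objective, dim=2, iters=300, seed=7):
    """Success-history-based position generation, minimal sketch:
    new candidates are sampled near a randomly chosen remembered success."""
    rng = random.Random(seed)
    pos = [rng.uniform(-5, 5) for _ in range(dim)]
    f0 = objective(pos)
    memory = [list(pos)]                   # archive of successful positions
    best_f = f0
    sigma = 0.5
    for _ in range(iters):
        guide = rng.choice(memory)         # generate near a remembered success
        cand = [g + rng.gauss(0, sigma) for g in guide]
        if objective(cand) < best_f:       # success: adopt and record it
            memory.append(cand)
            memory = memory[-10:]          # keep the history bounded
            best_f = objective(cand)
        sigma *= 0.995                     # gradually narrow the search
    return f0, best_f

sphere = lambda x: sum(v * v for v in x)   # standard benchmark function
start_f, final_f = optimize(sphere)
```

The bounded archive is the key device: old successes fade out, so the generated positions track the current promising region rather than the whole search history.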
       
  • Algorithms, Vol. 13, Pages 90: Application of Generalized Polynomial Chaos
           for Quantification of Uncertainties of Time Averages and Their
           Sensitivities in Chaotic Systems

    • Authors: Kyriakos Dimitrios Kantarakias, George Papadakis
      First page: 90
      Abstract: In this paper, we consider the effect of stochastic uncertainties on non-linear systems with chaotic behavior. More specifically, we quantify the effect of parametric uncertainties on time-averaged quantities and their sensitivities. Sampling methods for Uncertainty Quantification (UQ), such as Monte Carlo (MC), are very costly, while traditional methods for sensitivity analysis, such as the adjoint method, fail in chaotic systems. In this work, we employ non-intrusive generalized Polynomial Chaos (gPC) for UQ, coupled with the Multiple-Shooting Shadowing (MSS) algorithm for sensitivity analysis of chaotic systems. It is shown that gPC coupled with MSS is an appropriate method for conducting UQ in chaotic systems and produces results that match well with those from MC and Finite Differences (FD).
      Citation: Algorithms
      PubDate: 2020-04-13
      DOI: 10.3390/a13040090
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 91: A Generalized Alternating Linearization
           Bundle Method for Structured Convex Optimization with Inexact First-Order
           Oracles

    • Authors: Chunming Tang, Yanni Li, Xiaoxia Dong, Bo He
      First page: 91
      Abstract: In this paper, we consider a class of structured optimization problems whose objective function is the summation of two convex functions: f and h, which are not necessarily differentiable. We focus particularly on the case where the function f is general and its exact first-order information (function value and subgradient) may be difficult to obtain, while the function h is relatively simple. We propose a generalized alternating linearization bundle method for solving this class of problems, which can handle inexact first-order information of on-demand accuracy. The inexact information can be very general, which covers various oracles, such as inexact, partially inexact and asymptotically exact oracles, and so forth. At each iteration, the algorithm solves two interrelated subproblems: one aims to find the proximal point of the polyhedron model of f plus the linearization of h; the other aims to find the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, some preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is very encouraging.
      Citation: Algorithms
      PubDate: 2020-04-14
      DOI: 10.3390/a13040091
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 92: Deterministic Coresets for k-Means of
           Big Sparse Data

    • Authors: Artem Barger, Dan Feldman
      First page: 92
      Abstract: Let P be a set of n points in R^d, let k ≥ 1 be an integer, and let ε ∈ (0, 1) be a constant. An ε-coreset is a subset C ⊆ P with appropriate non-negative weights (scalars) that approximates any given set Q ⊆ R^d of k centers. That is, for every such Q, the sum of squared distances from each point in P to its closest point in Q equals, up to a factor of 1 ± ε, the corresponding weighted sum over C to the same k centers. If the coreset is small, we can solve problems such as k-means clustering or its variants (e.g., discrete k-means, where the centers are restricted to lie in P or in other restricted zones) on the small coreset to get faster provable approximations. Moreover, it is known that such coresets support streaming, dynamic and distributed data using the classic merge-reduce trees. The fact that the coreset is a subset implies that it preserves the sparsity of the data. However, existing coresets of this kind are randomized, and their size has at least a linear dependency on the dimension d. We suggest the first such coreset whose size is independent of d. This is also the first deterministic coreset construction whose resulting size is not exponential in d. Extensive experimental results and benchmarks are provided on public datasets, including the first coreset of the English Wikipedia, computed using Amazon’s cloud.
      Citation: Algorithms
      PubDate: 2020-04-14
      DOI: 10.3390/a13040092
      Issue No: Vol. 13, No. 4 (2020)
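The ε-coreset definition above can be checked numerically. In this sketch, C is built the trivial way (duplicate points merged, with multiplicities as weights), which makes the approximation exact (ε = 0) for any center set Q; real constructions achieve small |C| with ε > 0. The point sets below are illustrative.

```python
# Weighted k-means cost: sum over points of w * squared distance to the
# closest center. A valid ε-coreset matches the full cost up to 1 ± ε.

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cost(points, centers, weights=None):
    if weights is None:
        weights = [1.0] * len(points)
    return sum(w * min(sq_dist(p, q) for q in centers)
               for p, w in zip(points, weights))

P = [(0, 0), (0, 0), (0, 0), (4, 0), (4, 0), (10, 10)]
C = [(0, 0), (4, 0), (10, 10)]        # subset of P (sparsity preserved)
w = [3.0, 2.0, 1.0]                   # multiplicities as weights

Q = [(1, 0), (9, 9)]                  # an arbitrary set of k = 2 centers
full = cost(P, Q)
small = cost(C, Q, w)                 # identical here, since ε = 0
```

Because C is a weighted subset of P, running k-means on (C, w) instead of P keeps the data sparse, which is exactly the property the abstract emphasizes.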
       
  • Algorithms, Vol. 13, Pages 93: Path Planning for Laser Cladding Robot on
           Artificial Joint Surface Based on Topology Reconstruction

    • Authors: Yuanjin Li, Tao Chen, Defu Liu
      First page: 93
      Abstract: Artificial joint surface coating is a hot issue in the interdisciplinary fields of manufacturing, materials and biomedicine. Due to the complex surface characteristics of artificial joints, automatic cladding path planning for coating fabrication faces problems of efficiency and precision. In this study, a path planning method for a laser cladding robot working on artificial joint surfaces was proposed. The key to this method was the topological reconstruction of the artificial joint surface. On the basis of the topological relation, a set of parallel planes was used to intersect the CAD model to generate a set of continuous, directed and equidistant transversal lines on the artificial joint surface. The arch height error method was used to extract robot interpolation points from the transversal lines according to machining accuracy requirements. The coordinates and normal vectors of the interpolation points were used to calculate the position and pose of the robot tool center point (TCP). To ensure that the laser beam was always perpendicular to the artificial joint surface, a novel laser cladding set-up was designed in which the joint part, clamped by a six-axis robot, moved while the laser head remained fixed on the workbench. The proposed methodology was validated on the planned path over the surface of an artificial acetabular cup through simulation and experimentation with an industrial NACHI robot. The results indicated that the path planning method based on topological reconstruction was feasible and more efficient than the traditional robot teaching method.
      Citation: Algorithms
      PubDate: 2020-04-15
      DOI: 10.3390/a13040093
      Issue No: Vol. 13, No. 4 (2020)
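The arch height (chord-height) criterion mentioned above can be sketched in 2-D: walk along a densely sampled transversal line and emit an interpolation point whenever the deviation of the skipped samples from the current chord exceeds the tolerance. The curve, the tolerances and the greedy formulation are illustrative assumptions.

```python
import math

def point_line_dist(p, a, b):
    """Distance from p to the chord through a and b (2-D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def arch_height_filter(curve, tol):
    """Keep only the samples needed so every chord's arch height stays <= tol."""
    kept = [curve[0]]
    anchor = 0
    for i in range(2, len(curve)):
        # Max deviation of the intermediate samples from chord anchor -> i.
        dev = max(point_line_dist(curve[j], curve[anchor], curve[i])
                  for j in range(anchor + 1, i))
        if dev > tol:                 # chord too coarse: keep the previous sample
            kept.append(curve[i - 1])
            anchor = i - 1
    kept.append(curve[-1])
    return kept

# Dense samples of a circular arc; a tighter tolerance keeps more points.
arc = [(math.cos(t / 50), math.sin(t / 50)) for t in range(0, 80)]
coarse = arch_height_filter(arc, tol=0.05)
fine = arch_height_filter(arc, tol=0.001)
```

The machining accuracy requirement maps directly to `tol`: tightening it increases the number of TCP interpolation points on curved regions while flat regions stay sparse.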
       
  • Algorithms, Vol. 13, Pages 94: Stochastic Models to Qualify Stem Tapers

    • Authors: Petrauskas, Rupšys, Narmontas, Aleinikovas, Beniušienė, Šilinskas
      First page: 94
      Abstract: This study examines the performance of 11 tree taper models in predicting the diameter outside bark at any given height, as well as the total stem volume, of eight dominant tree species in the boreal forests of Lithuania. We develop eight new models using stochastic differential equations (SDEs). The symmetrical Vasicek model and the asymmetrical Gompertz model are used to describe tree taper evolution, as are geometric-type diffusion processes. These models are compared with four traditionally used tree taper models by means of performance statistics and residual analysis. The observed dataset consists of longitudinal measurements of 3703 trees representing the eight dominant tree species in Lithuania (pine, spruce, oak, ash, birch, black alder, white alder, and aspen). Overall, the SDE taper models produced the best goodness-of-fit statistics for diameter predictions. All results were implemented in the Maple computer algebra system using the “Statistics” and “VectorCalculus” packages.
      Citation: Algorithms
      PubDate: 2020-04-15
      DOI: 10.3390/a13040094
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 95: How to Identify Varying Lead–Lag Effects
           in Time Series Data: Implementation, Validation, and Application of the
           Generalized Causality Algorithm

    • Authors: Johannes Stübinger, Katharina Adler
      First page: 95
      Abstract: This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the algorithm’s outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs of gross domestic product and consumer price index (macroeconomics), the S&P 500 index and Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature.
      Citation: Algorithms
      PubDate: 2020-04-16
      DOI: 10.3390/a13040095
      Issue No: Vol. 13, No. 4 (2020)
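The generalized causality algorithm above finds varying, non-linear lead–lag structure; as a much simpler baseline illustrating what a lead–lag effect is, this sketch recovers a single constant lag by maximizing the sample cross-correlation. The toy periodic series and the helper name `best_lag` are assumptions, not the paper's method.

```python
def best_lag(x, y, max_lag):
    """Return the lag l (|l| <= max_lag) maximizing corr(x[t], y[t + l])."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        va = sum((u - ma) ** 2 for u in a) ** 0.5
        vb = sum((v - mb) ** 2 for v in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        scores[lag] = corr(a, b)
    return max(scores, key=scores.get)

# y follows x with a constant lag of 3 steps.
x = [t % 7 for t in range(60)]
y = [0, 0, 0] + x[:-3]
```

A constant-lag baseline like this is exactly what the paper's elastic time-axis adjustment generalizes: the latter lets the lag vary over time and capture structural breaks.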
       
  • Algorithms, Vol. 13, Pages 96: A Hybrid Grasshopper Optimization Algorithm
           Applied to the Open Vehicle Routing Problem

    • Authors: Valeria Soto-Mendoza, Irma García-Calvillo, Efraín Ruiz-y-Ruiz, Jaime Pérez-Terrazas
      First page: 96
      Abstract: This paper presents a hybrid grasshopper optimization algorithm using a novel decoder and local search to solve instances of the open vehicle routing problem with capacity and distance constraints. The algorithm’s decoder first defines the number of vehicles to be used and then it partitions the clients, assigning them to the available routes. The algorithm performs a local search in three neighborhoods after decoding. When a new best solution is found, every route is locally optimized by solving a traveling salesman problem, considering the depot and clients in the route. Three sets containing a total of 30 benchmark problems from the literature were used to test the algorithm. The experiments considered two cases of the problem. In the first, the primary objective is to minimize the total number of vehicles and then the total distance to be traveled. In the second case, the total distance traveled by the vehicles is minimized. The obtained results showed the algorithm’s proficient performance. For the first case, the algorithm was able to improve or match the best-known solutions for 21 of the 30 benchmark problems. For the second case, the best-known solutions for 18 of the 30 benchmark problems were found or improved by the algorithm. Finally, a case study from a real-life problem is included.
      Citation: Algorithms
      PubDate: 2020-04-16
      DOI: 10.3390/a13040096
      Issue No: Vol. 13, No. 4 (2020)
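The decoder idea described in the abstract above can be sketched as follows: the metaheuristic's continuous position vector is ranked into a client permutation, which is then split into open routes that respect the vehicle capacity. This is a hypothetical simplification; the distance constraint, the route-count decision and the local-search moves are omitted.

```python
# Decoder sketch: one real value per client (from the grasshopper position)
# -> client permutation -> capacity-feasible open routes.

def decode(positions, demands, capacity):
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    routes, load, current = [], 0.0, []
    for client in order:
        if load + demands[client] > capacity:   # vehicle full: open a new route
            routes.append(current)
            current, load = [], 0.0
        current.append(client)
        load += demands[client]
    if current:
        routes.append(current)
    return routes

positions = [0.7, 0.1, 0.5, 0.9, 0.3]   # e.g., one grasshopper's position
demands = [4, 3, 2, 5, 3]
routes = decode(positions, demands, capacity=8)
```

Because decoding is deterministic, the metaheuristic only ever manipulates real vectors, and feasibility with respect to capacity is guaranteed by construction.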
       
  • Algorithms, Vol. 13, Pages 97: Performance Assessment of Predictive
           Control—A Survey

    • Authors: Paweł D. Domański
      First page: 97
      Abstract: Model Predictive Control constitutes an important element of any modern control system, and interest in this technology is growing. More and more advanced predictive structures have been implemented. The first applications were in chemical engineering; now, Model Predictive Control can be found in almost all kinds of applications, from the process industry to embedded control systems or autonomous objects. Currently, each implementation of a control system requires strict financial justification. Application engineers need tools to measure and quantify the quality of control and the potential for improvement achievable by retrofitting control systems. Furthermore, a successful implementation of predictive control must conform to prior estimations not only during commissioning, but also during regular daily operation: the system must sustain the quality of control performance. The assessment of Model Predictive Control requires a suitable, often specific, methodology and comparative indicators. These demands establish the rationale for this survey. The paper therefore collects and summarizes control performance assessment methods specifically designed for, and utilized in, predictive control. These observations present a picture of the assessment technology. Further generalization leads to the formulation of a control assessment procedure to support control application engineers.
      Citation: Algorithms
      PubDate: 2020-04-17
      DOI: 10.3390/a13040097
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 98: Numerical Simulation of Non-Linear Models
           of Reaction–Diffusion for a DGT Sensor

    • Authors: Joan Cecilia Averós, Jaume Puy Llorens, Ramiro Uribe-Kaffure
      First page: 98
      Abstract: In this work, we present a novel strategy for the numerical solution of a coupled system of partial differential equations that describe reaction–diffusion processes of a mixture of metals and ligands that can be absorbed by a sensor or a microorganism in an aqueous medium. The novelty of this work lies in adequate database management combined with a direct iterative scheme, which allows the construction of simple, fast and efficient algorithms. Except under truly adverse conditions, the calculation converges and satisfactory solutions are reached. Computing times proved better than those obtained with some commercial programs. Although we concentrate on the solution for a particular system (Diffusive Gradients in Thin Films [DGT] sensors), the proposed algorithm does not require major modifications to handle new theoretical or experimental configurations. Since the quality of numerical simulations of reaction–diffusion problems often degrades as the values of the reaction rate constants increase, additional effort was invested in obtaining proper solutions in those cases.
      Citation: Algorithms
      PubDate: 2020-04-20
      DOI: 10.3390/a13040098
      Issue No: Vol. 13, No. 4 (2020)
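As a much simpler illustration of the kind of discretization involved in the system above, here is an explicit finite-difference sketch of a 1-D reaction–diffusion equation, u_t = D u_xx − k u, with a fixed bulk concentration at one edge. The single-species equation, the grid and the coefficients are illustrative assumptions; the DGT system solved in the paper is a coupled multi-species problem.

```python
def step(u, D, k, dx, dt):
    """One explicit Euler step; boundary values are left unchanged."""
    new = u[:]
    for i in range(1, len(u) - 1):
        diff = D * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (diff - k * u[i])   # diffusion + first-order reaction
    return new

# Stability of the explicit scheme requires D * dt / dx^2 <= 1/2 (here 0.04).
n, dx, dt, D, k = 21, 0.05, 1e-4, 1.0, 0.5
u = [1.0] + [0.0] * (n - 1)          # metal enters from the left edge
for _ in range(500):
    u = step(u, D, k, dx, dt)
    u[0] = 1.0                       # Dirichlet boundary: bulk solution
```

As the abstract notes, large reaction rate constants make such problems stiff; an explicit scheme like this one then forces very small dt, which is one motivation for the iterative strategy the paper develops.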
       
  • Algorithms, Vol. 13, Pages 99: A New Lossless DNA Compression Algorithm
           Based on A Single-Block Encoding Scheme

    • Authors: Deloula Mansouri, Xiaohui Yuan, Abdeldjalil Saidani
      First page: 99
      Abstract: With the recent evolution of DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding ever more storage and bandwidth. Managing, analyzing and, in particular, storing these large amounts of data has become a major scientific challenge for bioinformatics, making compression necessary. In this paper, we describe a new reference-free DNA compressor, abbreviated DNAC-SBE. DNAC-SBE is a lossless hybrid compressor that consists of three phases. First, starting from the most frequent base (Bi), the positions of each Bi are replaced with ones and the positions of the other bases, whose frequencies are smaller than that of Bi, are replaced with zeros. Second, to encode the generated streams, we propose a new single-block encoding scheme (SBE) based on the exploitation of the positions of neighboring bits within a block, using two different techniques. Finally, the proposed algorithm dynamically assigns the shorter length code to each block. Results show that DNAC-SBE outperforms state-of-the-art compressors and proves its efficiency in terms of the special conditions imposed on compressed data, storage space and data transfer rate, regardless of the file format or the size of the data.
      Citation: Algorithms
      PubDate: 2020-04-20
      DOI: 10.3390/a13040099
      Issue No: Vol. 13, No. 4 (2020)
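The first DNAC-SBE phase described above can be sketched as follows: for each base, ordered from most to least frequent, emit a binary stream with ones at that base's positions and zeros elsewhere. This shows only plain per-base masks on a toy sequence; the exclusion of already-covered positions and the block-encoding phases of the real scheme are omitted.

```python
from collections import Counter

def base_streams(seq):
    """Binary position streams per base, most frequent base first."""
    order = [b for b, _ in Counter(seq).most_common()]
    return {b: "".join("1" if c == b else "0" for c in seq) for b in order}, order

streams, order = base_streams("AACGTAACGG")
```

Each stream is then highly compressible on its own, which is what the single-block encoding phase exploits.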
       
  • Algorithms, Vol. 13, Pages 100: A Survey of Low-Rank Updates of
           Preconditioners for Sequences of Symmetric Linear Systems

    • Authors: Luca Bergamaschi
      First page: 100
      Abstract: The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, …, arising in many scientific applications, such as the discretization of transient Partial Differential Equations (PDEs), the solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. We analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We also review some techniques to efficiently approximate the linearly independent vectors that constitute the low-rank corrections, whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be greatly enhanced by the use of low-rank updates.
      Citation: Algorithms
      PubDate: 2020-04-21
      DOI: 10.3390/a13040100
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 101: A New Way to Store Simple Text Files

    • Authors: Marcin Lawnik, Artur Pełka, Adrian Kapczyński
      First page: 101
      Abstract: In the era of ubiquitous digitization and the Internet of Things (IoT), information plays a vital role. All types of data are collected, and some of these data are stored as text files. An important aspect, regardless of the type of data, is file storage, especially the amount of disk space required: the less space used to store data sets, the lower the cost of this service. Another important aspect of storing data warehouses as files is the cost of the data transmission needed for file transfer and processing. Moreover, stored data should be at least minimally protected against access and reading by other entities. These aspects are particularly important for large data sets such as Big Data. Considering the above criteria, i.e., minimizing storage space and data transfer while ensuring minimum security, the main goal of this article is to show a new way of storing text files. The article presents a method that converts data from text files such as txt, json, html and py into images (image files) in png format. Taking into account criteria such as the output file size, the results obtained for the test files confirm that the presented method reduces the need for disk space and also hides data in an image file. The described method can be used for texts saved in extended ASCII and UTF-8 encoding.
      Citation: Algorithms
      PubDate: 2020-04-22
      DOI: 10.3390/a13040101
      Issue No: Vol. 13, No. 4 (2020)
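The text-to-image idea above can be sketched with the standard library alone: the UTF-8 bytes of a text are packed three at a time into RGB pixel tuples (padding the tail), which an imaging library such as Pillow could then save losslessly as PNG. The packing and padding conventions here are assumptions; the paper's exact mapping may differ.

```python
def text_to_pixels(text):
    """Pack UTF-8 bytes into (R, G, B) tuples; returns (pixels, pad_count)."""
    data = text.encode("utf-8")
    pad = (-len(data)) % 3
    data += b"\x00" * pad                    # pad to a multiple of 3 bytes
    pixels = [tuple(data[i:i + 3]) for i in range(0, len(data), 3)]
    return pixels, pad

def pixels_to_text(pixels, pad):
    """Inverse mapping: flatten pixels, drop padding, decode UTF-8."""
    data = bytes(b for px in pixels for b in px)
    return data[:len(data) - pad].decode("utf-8")

pixels, pad = text_to_pixels("hello")
restored = pixels_to_text(pixels, pad)
```

Lossless round-tripping is the essential property: PNG's compression then provides the space saving, and the image form provides the mild obfuscation the abstract mentions.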
       
  • Algorithms, Vol. 13, Pages 102: A Case Study for a Big Data and Machine
           Learning Platform to Improve Medical Decision Support in Population Health
           Management

    • Authors: Fernando López-Martínez, Edward Rolando Núñez-Valdez, Vicente García-Díaz, Zoran Bursac
      First page: 102
      Abstract: Big data and artificial intelligence are currently two of the most important and trending technologies for innovation and predictive analytics in healthcare, leading the digital healthcare transformation. The Keralty organization is already developing an intelligent big data analytics platform based on machine learning and data integration principles. We discuss how this platform becomes the new pillar for the organization to improve population health management, value-based care, and upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improved clinical operations, reduced costs of care, and the generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, diagnosis, and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as from public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia that integrates operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management.
      Citation: Algorithms
      PubDate: 2020-04-23
      DOI: 10.3390/a13040102
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 103: Practical Grammar Compression Based on
           Maximal Repeats †

    • Authors: Isamu Furuya, Takuya Takagi, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai, Takuya Kida
      First page: 103
      Abstract: This study presents an analysis of RePair, a grammar compression algorithm known for its simple scheme while also being practically effective. First, we show that the main process of RePair, that is, the step-by-step substitution of the most frequent symbol pairs, works within the corresponding most frequent maximal repeats. Then, we reveal the relation between maximal repeats and the grammars constructed by RePair. On the basis of this analysis, we propose a novel variant of RePair, called MR-RePair, which performs one-time substitution of the most frequent maximal repeats instead of consecutive substitution of the most frequent pairs. Experiments comparing the size of the constructed grammars and the execution times of RePair and MR-RePair on several text corpora demonstrate that MR-RePair constructs more compact grammars than RePair, especially for highly repetitive texts.
      Citation: Algorithms
      PubDate: 2020-04-23
      DOI: 10.3390/a13040103
      Issue No: Vol. 13, No. 4 (2020)
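RePair's core loop, referenced above, can be sketched in a few lines: repeatedly replace the most frequent adjacent symbol pair with a fresh non-terminal, recording one grammar rule per replacement. This is a minimal illustration of plain RePair; the MR-RePair variant, which substitutes whole maximal repeats in one step, is not shown.

```python
from collections import Counter

def repair(seq):
    """Plain RePair sketch: returns the final sequence and the grammar rules."""
    rules, next_sym = {}, 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                     # no pair worth a new rule
            break
        nt = ("N", next_sym)             # fresh non-terminal
        next_sym += 1
        rules[nt] = pair
        out, i = [], 0                   # left-to-right greedy substitution
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair("abababab")
```

On "abababab" the loop fires twice (ab -> N0, N0 N0 -> N1), illustrating how consecutive pair substitutions take several rounds to cover what is really a single maximal repeat, which is the inefficiency MR-RePair targets.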
       
  • Algorithms, Vol. 13, Pages 104: Decision Support System for Fitting and
           Mapping Nonlinear Functions with Application to Insect Pest Management in
           the Biological Control Context

    • Authors: Guimapi, Mohamed, Biber-Freudenberger, Mwangi, Ekesi, Borgemeister, Tonnang
      First page: 104
      Abstract: Moving from experimental data to modeling and characterizing the dynamics and interactions in natural processes is a challenging task. This paper proposes an interactive platform for fitting data derived from experiments to mathematical expressions and for carrying out spatial visualization. The platform is designed using a component-based software architecture and implemented in the R and Java programming languages. It uses experimental data as input for model fitting and then applies the obtained model at the landscape level via spatial temperature grid data to yield regional and continental maps. The different modules and functionalities of the tool are presented through a case study in which the tool is used to establish a temperature-dependent virulence model and to map the potential zone of efficacy of a fungal-based biopesticide. The decision support system (DSS) was developed in generic form; it can be used by anyone interested in fitting mathematical equations to experimental data collected following the described protocol and, depending on the type of investigation, offers the possibility of projecting the model at the landscape level.
      Citation: Algorithms
      PubDate: 2020-04-24
      DOI: 10.3390/a13040104
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 105: Variational Specific Mode Extraction: A
           Novel Method for Defect Signal Detection of Ferromagnetic Pipeline

    • Authors: Ju, Wang, Zhao
      First page: 105
      Abstract: The non-contact detection of buried ferromagnetic pipelines is a long-standing problem in the field of external pipeline inspection, and the extraction of the magnetic anomaly signal is a prerequisite for accurate detection. Pipeline defects cause fluctuations in magnetic signals, which are easily submerged in wide-band background noise in the absence of external excitation sources. Previously, Variational Mode Decomposition (VMD) was used to separate the modal components; however, VMD is based on a narrow-band signal-processing algorithm, and its computation is complex. In this article, a method based on Variational Specific Mode Extraction (VSME) is employed to extract the pipeline defect signal at a specific central frequency by signal modal decomposition, the specific mode being the weak magnetic anomaly signal of pipeline defects. VSME relies on the fact that a wide-band signal can be converted into a narrow-band signal by a demodulation method. The problem of wide-band signal decomposition is then expressed as an optimal demodulation problem, which can be solved by the alternating direction method of multipliers. The proposed algorithm is verified on artificially synthesized signals, where its performance exceeds that of VMD. The results show that, on experimental data, the VSME method can extract the magnetic anomaly signal of pipeline damage with better accuracy.
      Citation: Algorithms
      PubDate: 2020-04-24
      DOI: 10.3390/a13040105
      Issue No: Vol. 13, No. 4 (2020)
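The core idea, that a wide-band mixture can be reduced to a narrow-band problem around a chosen central frequency, can be illustrated with a much simpler stand-in: frequency-domain masking. The sketch below is not the VSME algorithm (which solves an optimal demodulation problem via the alternating direction method of multipliers); the signal, frequencies, and bandwidth are invented for illustration.

```python
import numpy as np

fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic mixture: a 50 Hz "defect" component plus a 200 Hz interferer.
signal = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 200 * t)

def extract_mode(x, fs, f_center, bandwidth):
    """Recover one narrow-band mode by masking the spectrum around
    f_center (a simplified stand-in for VSME's optimal demodulation)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    mask = np.abs(freqs - f_center) <= bandwidth / 2.0
    return np.fft.irfft(spectrum * mask, n=len(x))

mode_50 = extract_mode(signal, fs, f_center=50.0, bandwidth=10.0)
```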
       
  • Algorithms, Vol. 13, Pages 66: Observability of Uncertain Nonlinear
           Systems Using Interval Analysis

    • Authors: Thomas Paradowski, Sabine Lerch, Michelle Damaszek, Robert Dehnert, Bernd Tibken
      First page: 66
      Abstract: In the field of control engineering, the observability of uncertain nonlinear systems is often neglected and left unexamined because of the complex analytical calculations required for verification. The aim of this work is therefore to provide an algorithm that numerically analyzes the observability of nonlinear systems described by finite-dimensional, continuous-time sets of ordinary differential equations. The algorithm is based on definitions of distinguishability and local observability using a rank check, from which conditions are deduced. The only requirements are the uncertain model equations of the system. The methodology further verifies the observability of nonlinear systems on a given state space. If the state space is not fully observable, the algorithm provides the observable set of states. In addition, the results obtained by the algorithm allow insight into why the remaining states cannot be distinguished.
      Citation: Algorithms
      PubDate: 2020-03-16
      DOI: 10.3390/a13030066
      Issue No: Vol. 13, No. 3 (2020)
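For intuition about the rank check that the distinguishability definitions build on, the snippet below implements the classical Kalman observability rank condition for a linear system; the paper's contribution is the extension of this idea to uncertain nonlinear systems via interval analysis, which is not reproduced here. The example matrices are invented.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, CA^2, ..., CA^(n-1) (Kalman rank condition)."""
    n = A.shape[0]
    rows = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.vstack(rows)

# Example: a damped second-order system observed through its first state.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

O = observability_matrix(A, C)
observable = np.linalg.matrix_rank(O) == A.shape[0]
```

If the rank is deficient, the null space of the observability matrix identifies exactly the state directions that cannot be distinguished from the output.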
       
  • Algorithms, Vol. 13, Pages 67: Optimizing Convolutional Neural Network
           Hyperparameters by Enhanced Swarm Intelligence Metaheuristics

    • Authors: Nebojsa Bacanin, Timea Bezdan, Eva Tuba, Ivana Strumberger, Milan Tuba
      First page: 67
      Abstract: Computer vision is one of the frontier technologies of computer science. It is used to build artificial systems that extract valuable information from images, and it has a broad range of applications in areas such as agriculture, business, and healthcare. Convolutional neural networks are the key algorithms of computer vision, and in recent years they have attained notable advances on many real-world problems. The accuracy of a network on a particular task depends profoundly on its hyperparameter configuration, and obtaining the right set of hyperparameters is a time-consuming process that requires expertise. To address this concern, we propose an automatic method for hyperparameter optimization and structure design that implements enhanced metaheuristic algorithms. The aim of this paper is twofold. First, we propose enhanced versions of the tree growth and firefly algorithms that improve on the original implementations. Second, we adopt the proposed enhanced algorithms for hyperparameter optimization. The modified metaheuristics are first evaluated on standard unconstrained benchmark functions and compared to the original algorithms; afterward, the improved algorithms are employed for network design. The experiments are carried out on the well-known MNIST image classification benchmark, and a comparative analysis with other outstanding approaches tested on the same problem is conducted. The experimental results show that both proposed improved methods achieve higher performance than the existing techniques in terms of classification accuracy and the use of computational resources.
      Citation: Algorithms
      PubDate: 2020-03-17
      DOI: 10.3390/a13030067
      Issue No: Vol. 13, No. 3 (2020)
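As a point of reference for the metaheuristics involved, here is a minimal baseline firefly algorithm minimizing the sphere benchmark function. The paper proposes enhanced variants and applies them to CNN hyperparameters; this sketch, including all parameter values, is only the standard scheme on a toy objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def firefly(objective, dim=2, n=15, iters=100,
            alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0)):
    """Baseline firefly algorithm: dimmer fireflies move toward
    brighter (lower-cost) ones, with a small random walk."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n, dim))
    light = np.array([objective(p) for p in pos])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:              # j is brighter
                    r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i])
                    pos[i] += alpha * rng.uniform(-0.5, 0.5, dim)
                    pos[i] = np.clip(pos[i], lo, hi)
                    light[i] = objective(pos[i])
    best = int(np.argmin(light))
    return pos[best], light[best]

best_x, best_f = firefly(sphere)
```

Hyperparameter optimization then amounts to replacing the sphere function with a routine that trains a network under the candidate configuration and returns its validation error.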
       
  • Algorithms, Vol. 13, Pages 68: Adding Edges for Maximizing Weighted
           Reachability

    • Authors: Federico Corò, Gianlorenzo D’Angelo, Cristina M. Pinotti
      First page: 68
      Abstract: In this paper, we consider the problem of improving the reachability of a graph. We approach the problem from a graph augmentation perspective, in which a limited number of edges is added to the graph to increase the overall number of reachable nodes. We call this new problem the Maximum Connectivity Improvement (MCI) problem. We first show that, for the purpose of solving MCI, we can focus on Directed Acyclic Graphs (DAGs) only. We show that approximating the MCI problem on a DAG to within any constant factor greater than 1 - 1/e is NP-hard, even when restricted to graphs with a single source or a single sink, and that the problem remains NP-complete when further restricted to unitary weights. We then propose two polynomial-time greedy algorithms that guarantee a (1 - 1/e)-approximation ratio on DAGs with a single source, a single sink, or two sources. Finally, this paper presents a dynamic programming algorithm for the MCI problem on trees with a single source that produces optimal solutions in polynomial time.
      Citation: Algorithms
      PubDate: 2020-03-18
      DOI: 10.3390/a13030068
      Issue No: Vol. 13, No. 3 (2020)
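The greedy strategy for DAGs can be sketched in a few lines: repeatedly add the edge that most increases the number of nodes reachable from the source. The code below is an illustrative simplification on an unweighted digraph (the graph, the helper names, and the edge-candidate rule are assumptions, and no care is taken to preserve acyclicity), not the paper's analyzed algorithms.

```python
import itertools

def reachable(adj, source):
    """Set of nodes reachable from source (iterative DFS)."""
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def greedy_mci(nodes, adj, source, budget):
    """Add up to `budget` edges, each time picking the edge out of the
    currently reachable set that maximizes the reachability gain."""
    adj = {u: list(vs) for u, vs in adj.items()}
    added = []
    for _ in range(budget):
        current = reachable(adj, source)
        best_gain, best_edge = 0, None
        for u, v in itertools.product(current, nodes):
            if v in current:
                continue
            adj.setdefault(u, []).append(v)        # tentatively add (u, v)
            gain = len(reachable(adj, source)) - len(current)
            adj[u].remove(v)                       # undo
            if gain > best_gain:
                best_gain, best_edge = gain, (u, v)
        if best_edge is None:
            break
        adj.setdefault(best_edge[0], []).append(best_edge[1])
        added.append(best_edge)
    return added
```

The submodularity of the reachability gain is what yields the (1 - 1/e) guarantee for greedy schemes of this kind.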
       
  • Algorithms, Vol. 13, Pages 69: Oil Spill Monitoring of Shipborne Radar
           Image Features Using SVM and Local Adaptive Threshold

    • Authors: Jin Xu, Haixia Wang, Can Cui, Baigang Zhao, Bo Li
      First page: 69
      Abstract: In the case of marine accidents, monitoring marine oil spills can provide an important basis for identifying liabilities and assessing the damage. Shipborne radar can provide large-scale, real-time, all-weather monitoring with high resolution, and it therefore has the potential for broad application in oil spill monitoring. Starting from original gray-scale shipborne radar images acquired during the Dalian 7.16 oil spill accident, a complete oil spill detection method is proposed. First, the co-frequency interferences and speckles in the original image are eliminated by preprocessing. Second, the wave information is classified using a support vector machine (SVM), and the effective wave monitoring area is generated according to the gray distribution matrix. Finally, oil spills are detected by a local adaptive threshold and displayed on an electronic chart based on a geographic information system (GIS). The results show that the SVM can extract the effective wave information from the original shipborne radar image and that the local adaptive threshold method has strong applicability for oil film segmentation. This method can provide a technical basis for real-time cleanup and liability determination in oil spill accidents.
      Citation: Algorithms
      PubDate: 2020-03-21
      DOI: 10.3390/a13030069
      Issue No: Vol. 13, No. 3 (2020)
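The final segmentation step can be illustrated with a minimal local adaptive threshold: each pixel is compared against the mean of its own neighborhood, so dark oil film stands out against locally varying sea clutter. The window size, offset, and the toy image below are invented; the paper's exact thresholding scheme may differ.

```python
import numpy as np

def local_adaptive_threshold(img, block=3, offset=0.0):
    """Flag pixels darker than the mean of their (2*block+1)^2
    neighborhood minus `offset` (oil film is dark in radar imagery)."""
    h, w = img.shape
    padded = np.pad(img, block, mode='edge')
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * block + 1, j:j + 2 * block + 1]
            out[i, j] = img[i, j] < window.mean() - offset
    return out

# Toy gray-scale "radar" image: bright sea clutter with a dark 3x3 patch.
img = np.full((10, 10), 100.0)
img[4:7, 4:7] = 20.0
mask = local_adaptive_threshold(img, block=3, offset=10.0)
```

Because the threshold is computed per neighborhood, the method adapts to uneven illumination of the sea surface, which a single global threshold cannot do.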
       
  • Algorithms, Vol. 13, Pages 70: Ensemble Learning of Hybrid Acoustic
           Features for Speech Emotion Recognition

    • Authors: Kudakwashe Zvarevashe, Oludayo Olugbara
      First page: 70
      Abstract: Automatic recognition of emotion is important for facilitating seamless interactivity between a human being and an intelligent robot, towards the full realization of a smart society. Signal processing and machine learning methods are widely applied to recognize human emotions from features extracted from facial images, video files, or speech signals. However, these features have not been able to recognize the fear emotion with the same level of precision as other emotions. The authors propose the agglutination of prosodic and spectral features from a group of carefully selected features to realize hybrid acoustic features for improving emotion recognition. Experiments were performed to test the effectiveness of the proposed features, which were extracted from the speech files of two public databases and used to train five popular ensemble learning algorithms. The results show that random decision forest ensemble learning on the proposed hybrid acoustic features is highly effective for speech emotion recognition.
      Citation: Algorithms
      PubDate: 2020-03-22
      DOI: 10.3390/a13030070
      Issue No: Vol. 13, No. 3 (2020)
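A toy version of a hybrid prosodic-plus-spectral feature vector can be computed with a few lines of NumPy: frame energy and zero-crossing rate (prosodic-style cues) alongside the spectral centroid (a spectral cue). The two synthetic "utterances" and the tiny feature set are illustrative assumptions; the paper uses far richer features extracted from real speech corpora.

```python
import numpy as np

def hybrid_acoustic_features(signal, fs):
    """Toy hybrid vector: [energy, zero-crossing rate, spectral centroid]."""
    energy = float(np.mean(signal ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return np.array([energy, zcr, centroid])

fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
calm = 0.3 * np.sin(2 * np.pi * 150 * t)   # soft, low-pitched tone
fear = 1.0 * np.sin(2 * np.pi * 400 * t)   # loud, higher-pitched tone
f_calm = hybrid_acoustic_features(calm, fs)
f_fear = hybrid_acoustic_features(fear, fs)
```

In the paper's pipeline, vectors of this kind are then fed to ensemble learners such as random decision forests.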
       
  • Algorithms, Vol. 13, Pages 71: Two-Step Classification with SVD
           Preprocessing of Distributed Massive Datasets in Apache Spark

    • Authors: Athanasios Alexopoulos, Georgios Drakopoulos, Andreas Kanavos, Phivos Mylonas, Gerasimos Vonitsanos
      First page: 71
      Abstract: At the dawn of the 10V, or big data, era, there is a considerable number of data sources, such as smartphones, IoT devices, social media, smart city sensors, and the health care system, all of which constitute but a small portion of the data lakes feeding the big data ecosystem. This 10V data growth poses two primary challenges, namely storing and processing. Concerning the latter, new frameworks have been developed, including distributed platforms such as the Hadoop ecosystem. Classification is a major machine learning task typically executed on distributed platforms, and consequently many algorithmic techniques tailored to these platforms have been developed. This article relies, in two ways, on classifiers implemented in MLlib, the main machine learning library of Apache Spark. First, a large number of classifiers is applied to two datasets, namely Higgs and PAMAP. Second, a two-step classification is performed ab ovo on the same datasets: the singular value decomposition of the data matrix first determines a set of transformed attributes, which in turn drive the MLlib classifiers. The twofold purpose of the proposed architecture is to reduce complexity while maintaining a similar, if not better, level of accuracy, recall, and F1. The intuition behind this approach stems from the engineering principle of breaking down complex problems into simpler, more manageable tasks. The experiments, based on the same Spark cluster, indicate that the proposed architecture outperforms the individual classifiers with respect to both complexity and the abovementioned metrics.
      Citation: Algorithms
      PubDate: 2020-03-24
      DOI: 10.3390/a13030071
      Issue No: Vol. 13, No. 3 (2020)
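The two-step idea, SVD first and classification on the transformed attributes second, can be sketched in plain NumPy on synthetic data. The dataset, the number of retained components, and the nearest-centroid classifier are all stand-in assumptions for the Higgs/PAMAP data and MLlib classifiers used in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class dataset: 200 rows, 50 attributes, with one
# informative direction (attribute 0) separating the classes.
n = 200
labels = rng.integers(0, 2, n)
X = 0.5 * rng.normal(size=(n, 50))
X[:, 0] += 3.0 * labels

# Step 1: SVD of the centered data matrix; keep the top-k right
# singular vectors and project onto them (the transformed attributes).
k = 5
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T

# Step 2: feed the reduced attributes to a simple classifier
# (nearest class centroid stands in for the MLlib classifiers).
c0 = Z[labels == 0].mean(axis=0)
c1 = Z[labels == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) <
        np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = float(np.mean(pred == labels))
```

The classifier now works in 5 dimensions instead of 50, which is the complexity reduction the architecture aims for.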
       
  • Algorithms, Vol. 13, Pages 72: Energy Efficient Routing in Wireless Sensor
           Networks: A Comprehensive Survey

    • Authors: Nakas, Kandris, Visvardis
      First page: 72
      Abstract: Wireless Sensor Networks (WSNs) are among the most rapidly emerging technologies, thanks to their great capabilities and ever-growing range of applications. However, the lifetime of a WSN is severely restricted by the limited energy capacity of its sensor nodes, which is why energy conservation is considered the most important research concern for WSNs. Radio communication is the most energy-consuming function in a WSN, so energy-efficient routing is needed to save energy and thus prolong the lifetime of WSNs. For this reason, numerous protocols for energy-efficient routing in WSNs have been proposed. This article offers an analytical and up-to-date survey of protocols of this kind. The classic and modern protocols presented are categorized according to (i) how the network is structured, (ii) how data are exchanged, (iii) whether location information is used, and (iv) whether Quality of Service (QoS) or multiple paths are supported. Within each category, protocols are described and compared in terms of specific performance metrics, and their advantages and disadvantages are discussed. Finally, the study's findings are discussed, concluding remarks are drawn, and open research issues are indicated.
      Citation: Algorithms
      PubDate: 2020-03-24
      DOI: 10.3390/a13030072
      Issue No: Vol. 13, No. 3 (2020)
       
  • Algorithms, Vol. 13, Pages 73: Classical and Deep Learning Paradigms for
           Detection and Validation of Key Genes of Risky Outcomes of HCV

    • Authors: Abdel Samee
      First page: 73
      Abstract: Hepatitis C virus (HCV) is one of the most dangerous viruses worldwide. It is the foremost cause of hepatic cirrhosis and hepatocellular carcinoma (HCC). Detecting new key genes that play a role in the growth of HCC in HCV patients using machine learning techniques paves the way for producing accurate antivirals. This work comprises two phases: detecting the up/downregulated genes using classical univariate and multivariate feature selection methods, and validating the retrieved list of genes using in silico classifiers. However, classification algorithms in the medical domain frequently suffer from a deficiency of training cases. Therefore, a deep neural network approach is proposed here to validate the significance of the retrieved genes in classifying HCV-infected versus uninfected samples. The validation model is based on the artificial generation of new examples from the retrieved genes' expression values using sparse autoencoders. Subsequently, the generated gene expression data are used to train conventional classifiers. The results of the first phase yielded a better retrieval of significant genes using Principal Component Analysis (PCA), a multivariate approach: the list of genes retrieved using PCA contained a higher number of HCC biomarkers than the lists retrieved from the univariate methods. In the second phase, the classification accuracy reveals the relevance of the extracted key genes in distinguishing HCV-infected from uninfected samples.
      Citation: Algorithms
      PubDate: 2020-03-24
      DOI: 10.3390/a13030073
      Issue No: Vol. 13, No. 3 (2020)
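The first-phase multivariate selection can be illustrated with a minimal PCA-based gene ranking: genes are scored by the magnitude of their loading on the first principal component of a synthetic expression matrix in which one planted "key gene" separates the sample groups. Everything here (the matrix, the planted gene index, the ranking rule) is invented for illustration and is far simpler than the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic expression matrix: 40 samples x 100 genes; gene 7 is the
# planted "key gene" separating the two sample groups.
groups = np.repeat([0, 1], 20)
expr = rng.normal(size=(40, 100))
expr[:, 7] += 4.0 * groups

# Multivariate selection via PCA: rank genes by the magnitude of their
# loading on the first principal component.
Xc = expr - expr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
ranking = np.argsort(-np.abs(Vt[0]))
top_genes = ranking[:5]
```

Because PCA scores genes jointly rather than one at a time, it can surface genes whose effect is spread across correlated directions, which is the advantage the abstract reports over the univariate methods.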
       
  • Algorithms, Vol. 13, Pages 74: Bi-Objective Dynamic Multiprocessor Open
           Shop Scheduling: An Exact Algorithm

    • Authors: Tamer F. Abdelmaguid
      First page: 74
      Abstract: An important element in the integration of the fourth industrial revolution is the development of efficient algorithms for dynamic scheduling problems. In dynamic scheduling, jobs can be admitted during the execution of a given schedule, which necessitates appropriately planned rescheduling decisions to maintain a high level of performance. In this paper, a dynamic case of the multiprocessor open shop scheduling problem is addressed. This problem appears in different contexts, particularly those involving diagnostic operations in the maintenance and health care industries. Two objectives are considered simultaneously: the minimization of the makespan and the minimization of the mean weighted flow time. The former objective aims to sustain efficient utilization of the available resources, while the latter helps maintain a high level of customer satisfaction. An exact algorithm is presented for generating optimal Pareto front solutions. Although the studied problem is NP-hard for both objectives, the presented algorithm can be used to solve small instances, as demonstrated through computational experiments on a testbed of 30 randomly generated instances. The algorithm can also generate approximate Pareto front solutions in cases where the computational time needed to find proven optimal solutions for the generated sub-problems is excessive. Furthermore, the computational results are used to investigate the characteristics of the optimal Pareto front of the studied problem, from which some insights for future metaheuristic developments are drawn.
      Citation: Algorithms
      PubDate: 2020-03-24
      DOI: 10.3390/a13030074
      Issue No: Vol. 13, No. 3 (2020)
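The bi-objective comparison at the heart of such algorithms is Pareto dominance over (makespan, mean weighted flow time) pairs. The sketch below is only a generic non-dominated filter applied to invented objective vectors, not the paper's exact algorithm.

```python
def pareto_front(points):
    """Keep the non-dominated points (both objectives minimized):
    p is dominated if some other point q is no worse in both objectives."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Invented candidate schedules as (makespan, mean weighted flow time).
solutions = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 5)]
front = pareto_front(solutions)
```

This O(n^2) filter is sufficient for post-processing small candidate sets; an exact algorithm instead generates the optimal front directly.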
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


JournalTOCs © 2009-