Subjects -> MATHEMATICS (Total: 1100 journals)
    - APPLIED MATHEMATICS (88 journals)
    - GEOMETRY AND TOPOLOGY (23 journals)
    - MATHEMATICS (812 journals)
    - MATHEMATICS (GENERAL) (43 journals)
    - NUMERICAL ANALYSIS (24 journals)
    - PROBABILITIES AND MATH STATISTICS (110 journals)

MATHEMATICS (812 journals)

Showing 1 - 200 of 538 Journals sorted alphabetically
Abakós     Open Access   (Followers: 5)
Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg     Hybrid Journal   (Followers: 4)
Academic Voices : A Multidisciplinary Journal     Open Access   (Followers: 2)
Accounting Perspectives     Full-text available via subscription   (Followers: 7)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 16)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Mathematical Software (TOMS)     Hybrid Journal   (Followers: 6)
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 40)
Acta Applicandae Mathematicae     Hybrid Journal   (Followers: 1)
Acta Mathematica     Hybrid Journal   (Followers: 12)
Acta Mathematica Hungarica     Hybrid Journal   (Followers: 2)
Acta Mathematica Scientia     Full-text available via subscription   (Followers: 5)
Acta Mathematica Sinica, English Series     Hybrid Journal   (Followers: 6)
Acta Mathematica Vietnamica     Hybrid Journal  
Acta Mathematicae Applicatae Sinica, English Series     Hybrid Journal  
Advanced Science Letters     Full-text available via subscription   (Followers: 12)
Advances in Applied Clifford Algebras     Hybrid Journal   (Followers: 4)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 6)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Complex Systems     Hybrid Journal   (Followers: 10)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 23)
Advances in Decision Sciences     Open Access   (Followers: 4)
Advances in Difference Equations     Open Access   (Followers: 3)
Advances in Fixed Point Theory     Open Access   (Followers: 8)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 19)
Advances in Linear Algebra & Matrix Theory     Open Access   (Followers: 11)
Advances in Materials Science     Open Access   (Followers: 19)
Advances in Mathematical Physics     Open Access   (Followers: 8)
Advances in Mathematics     Full-text available via subscription   (Followers: 17)
Advances in Nonlinear Analysis     Open Access   (Followers: 1)
Advances in Numerical Analysis     Open Access   (Followers: 9)
Advances in Operations Research     Open Access   (Followers: 13)
Advances in Operator Theory     Hybrid Journal   (Followers: 4)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Pure and Applied Mathematics     Hybrid Journal   (Followers: 10)
Advances in Pure Mathematics     Open Access   (Followers: 11)
Advances in Science and Research (ASR)     Open Access   (Followers: 9)
Aequationes Mathematicae     Hybrid Journal   (Followers: 2)
African Journal of Educational Studies in Mathematics and Sciences     Full-text available via subscription   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 7)
Afrika Matematika     Hybrid Journal   (Followers: 3)
Air, Soil & Water Research     Open Access   (Followers: 13)
AKSIOMA Journal of Mathematics Education     Open Access   (Followers: 3)
AKSIOMATIK : Jurnal Penelitian Pendidikan dan Pembelajaran Matematika     Open Access   (Followers: 1)
Al-Jabar : Jurnal Pendidikan Matematika     Open Access   (Followers: 1)
Al-Qadisiyah Journal for Computer Science and Mathematics     Open Access   (Followers: 1)
AL-Rafidain Journal of Computer Sciences and Mathematics     Open Access   (Followers: 6)
Algebra and Logic     Hybrid Journal   (Followers: 7)
Algebra Colloquium     Hybrid Journal   (Followers: 4)
Algebra Universalis     Hybrid Journal   (Followers: 2)
Algorithmic Operations Research     Open Access   (Followers: 5)
Algorithms     Open Access   (Followers: 12)
Algorithms Research     Open Access   (Followers: 1)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 10)
American Journal of Mathematical Analysis     Open Access   (Followers: 2)
American Journal of Mathematical and Management Sciences     Hybrid Journal   (Followers: 1)
American Journal of Mathematics     Full-text available via subscription   (Followers: 7)
American Journal of Operations Research     Open Access   (Followers: 8)
American Mathematical Monthly     Full-text available via subscription   (Followers: 6)
An International Journal of Optimization and Control: Theories & Applications     Open Access   (Followers: 11)
Anadolu University Journal of Science and Technology B : Theoretical Sciences     Open Access  
Analele Universitatii Ovidius Constanta - Seria Matematica     Open Access  
Analysis and Applications     Hybrid Journal   (Followers: 1)
Analysis and Mathematical Physics     Hybrid Journal   (Followers: 6)
Analysis Mathematica     Full-text available via subscription  
Analysis. International mathematical journal of analysis and its applications     Hybrid Journal   (Followers: 5)
Anargya : Jurnal Ilmiah Pendidikan Matematika     Open Access   (Followers: 7)
Annales Mathematicae Silesianae     Open Access   (Followers: 2)
Annales mathématiques du Québec     Hybrid Journal   (Followers: 4)
Annales Universitatis Mariae Curie-Sklodowska, sectio A – Mathematica     Open Access   (Followers: 1)
Annales Universitatis Paedagogicae Cracoviensis. Studia Mathematica     Open Access  
Annali di Matematica Pura ed Applicata     Hybrid Journal   (Followers: 1)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 13)
Annals of Discrete Mathematics     Full-text available via subscription   (Followers: 8)
Annals of Functional Analysis     Hybrid Journal   (Followers: 4)
Annals of Mathematics     Full-text available via subscription   (Followers: 2)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 14)
Annals of PDE     Hybrid Journal  
Annals of Pure and Applied Logic     Open Access   (Followers: 4)
Annals of the Alexandru Ioan Cuza University - Mathematics     Open Access  
Annals of the Institute of Statistical Mathematics     Hybrid Journal   (Followers: 1)
Annals of West University of Timisoara - Mathematics     Open Access  
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access   (Followers: 2)
Annuaire du Collège de France     Open Access   (Followers: 6)
ANZIAM Journal     Open Access   (Followers: 1)
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 3)
Applications of Mathematics     Hybrid Journal   (Followers: 3)
Applied Categorical Structures     Hybrid Journal   (Followers: 4)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 14)
Applied Mathematics     Open Access   (Followers: 4)
Applied Mathematics     Open Access   (Followers: 8)
Applied Mathematics & Optimization     Hybrid Journal   (Followers: 10)
Applied Mathematics - A Journal of Chinese Universities     Hybrid Journal   (Followers: 1)
Applied Mathematics and Nonlinear Sciences     Open Access  
Applied Mathematics Letters     Full-text available via subscription   (Followers: 4)
Applied Mathematics Research eXpress     Hybrid Journal   (Followers: 1)
Applied Network Science     Open Access   (Followers: 3)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 6)
Arab Journal of Mathematical Sciences     Open Access   (Followers: 4)
Arabian Journal of Mathematics     Open Access   (Followers: 2)
Archive for Mathematical Logic     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 6)
Archive of Numerical Software     Open Access  
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 6)
Arkiv för Matematik     Hybrid Journal   (Followers: 1)
Armenian Journal of Mathematics     Open Access   (Followers: 1)
Arnold Mathematical Journal     Hybrid Journal   (Followers: 1)
Artificial Satellites     Open Access   (Followers: 24)
Asia-Pacific Journal of Operational Research     Hybrid Journal   (Followers: 3)
Asian Journal of Algebra     Open Access   (Followers: 1)
Asian Research Journal of Mathematics     Open Access   (Followers: 1)
Asian-European Journal of Mathematics     Hybrid Journal   (Followers: 3)
Australian Mathematics Teacher, The     Full-text available via subscription   (Followers: 7)
Australian Primary Mathematics Classroom     Full-text available via subscription   (Followers: 5)
Australian Senior Mathematics Journal     Full-text available via subscription   (Followers: 2)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Axioms     Open Access   (Followers: 1)
Baltic International Yearbook of Cognition, Logic and Communication     Open Access   (Followers: 2)
Banach Journal of Mathematical Analysis     Hybrid Journal   (Followers: 2)
Basin Research     Hybrid Journal   (Followers: 5)
BIBECHANA     Open Access   (Followers: 2)
Biomath     Open Access  
BIT Numerical Mathematics     Hybrid Journal   (Followers: 1)
Boletim Cearense de Educação e História da Matemática     Open Access  
Boletim de Educação Matemática     Open Access  
Boletín de la Sociedad Matemática Mexicana     Hybrid Journal  
Bollettino dell'Unione Matematica Italiana     Full-text available via subscription   (Followers: 3)
British Journal of Mathematical and Statistical Psychology     Full-text available via subscription   (Followers: 18)
Bruno Pini Mathematical Analysis Seminar     Open Access  
Buletinul Academiei de Stiinte a Republicii Moldova. Matematica     Open Access   (Followers: 13)
Bulletin des Sciences Mathématiques     Full-text available via subscription   (Followers: 4)
Bulletin of Dnipropetrovsk University. Series : Communications in Mathematical Modeling and Differential Equations Theory     Open Access   (Followers: 3)
Bulletin of Mathematical Sciences     Open Access   (Followers: 1)
Bulletin of Symbolic Logic     Full-text available via subscription   (Followers: 2)
Bulletin of the Australian Mathematical Society     Full-text available via subscription   (Followers: 2)
Bulletin of the Brazilian Mathematical Society, New Series     Hybrid Journal  
Bulletin of the Iranian Mathematical Society     Hybrid Journal  
Bulletin of the London Mathematical Society     Hybrid Journal   (Followers: 3)
Bulletin of the Malaysian Mathematical Sciences Society     Hybrid Journal  
Cadernos do IME : Série Matemática     Open Access   (Followers: 1)
Calculus of Variations and Partial Differential Equations     Hybrid Journal  
Canadian Journal of Mathematics / Journal canadien de mathématiques     Hybrid Journal  
Canadian Journal of Science, Mathematics and Technology Education     Hybrid Journal   (Followers: 22)
Canadian Mathematical Bulletin     Hybrid Journal  
Carpathian Mathematical Publications     Open Access   (Followers: 1)
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 3)
CHANCE     Hybrid Journal   (Followers: 5)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chaos, Solitons & Fractals : X     Open Access  
ChemSusChem     Hybrid Journal   (Followers: 8)
Chinese Annals of Mathematics, Series B     Hybrid Journal  
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
Chinese Journal of Mathematics     Open Access  
Ciencia     Open Access   (Followers: 1)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CODEE Journal     Open Access   (Followers: 3)
Cogent Mathematics     Open Access   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 3)
Collectanea Mathematica     Hybrid Journal  
College Mathematics Journal     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 15)
Commentarii Mathematici Helvetici     Hybrid Journal  
Communications in Advanced Mathematical Sciences     Open Access  
Communications in Combinatorics and Optimization     Open Access  
Communications in Contemporary Mathematics     Hybrid Journal  
Communications in Mathematical Physics     Hybrid Journal   (Followers: 4)
Communications On Pure & Applied Mathematics     Hybrid Journal   (Followers: 4)
Complex Analysis and its Synergies     Open Access   (Followers: 3)
Complex Variables and Elliptic Equations: An International Journal     Hybrid Journal  
Composite Materials Series     Full-text available via subscription   (Followers: 9)
Compositio Mathematica     Full-text available via subscription  
Comptes Rendus Mathematique     Full-text available via subscription  
Computational and Applied Mathematics     Hybrid Journal   (Followers: 4)
Computational and Mathematical Methods     Hybrid Journal  
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 9)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Optimization and Applications     Hybrid Journal   (Followers: 9)
Computers & Mathematics with Applications     Full-text available via subscription   (Followers: 11)
Concrete Operators     Open Access   (Followers: 4)
Confluentes Mathematici     Hybrid Journal  
Contributions to Discrete Mathematics     Open Access   (Followers: 2)
Contributions to Game Theory and Management     Open Access  
COSMOS     Hybrid Journal  
Cryptography and Communications     Hybrid Journal   (Followers: 13)
Cuadernos de Investigación y Formación en Educación Matemática     Open Access  
Cubo. A Mathematical Journal     Open Access  
Current Research in Biostatistics     Open Access   (Followers: 8)
Czechoslovak Mathematical Journal     Hybrid Journal   (Followers: 1)
Daya Matematis : Jurnal Inovasi Pendidikan Matematika     Open Access   (Followers: 2)
Demographic Research     Open Access   (Followers: 15)
Demonstratio Mathematica     Open Access  


Algorithms
Journal Prestige (SJR): 0.217
Citation Impact (citeScore): 1
Number of Followers: 12  

  This is an Open Access journal
ISSN (Print) 1999-4893
Published by MDPI  [231 journals]
  • Algorithms, Vol. 13, Pages 176: Towards Cognitive Recommender Systems

    • Authors: Amin Beheshti, Shahpar Yakhchi, Salman Mousaeirad, Seyed Mohssen Ghafari, Srinivasa Reddy Goluguri, Mohammad Amin Edrisi
      First page: 176
      Abstract: Intelligence is the ability to learn from experience and use domain experts’ knowledge to adapt to new situations. In this context, an intelligent Recommender System should be able to learn from domain experts’ knowledge and experience, as it is vital to know the domain in which items will be recommended. Traditionally, Recommender Systems have been recognized as playlist generators for video/music services (e.g., Netflix and Spotify), e-commerce product recommenders (e.g., Amazon and eBay), or social content recommenders (e.g., Facebook and Twitter). However, Recommender Systems in modern enterprises are highly data-/knowledge-driven and may rely on users’ cognitive aspects such as personality, behavior, and attitude. In this paper, we survey and summarize previously published studies on Recommender Systems to help readers understand our method’s contributions to the field in this context. We discuss the current limitations of state-of-the-art approaches in Recommender Systems and the need for our new approach: a vision and a general framework for a new type of data-driven, knowledge-driven, and cognition-driven Recommender Systems, namely, Cognitive Recommender Systems. Cognitive Recommender Systems will be a new type of intelligent Recommender Systems that understand the user’s preferences, detect changes in user preferences over time, predict the user’s unknown favorites, and explore adaptive mechanisms to enable intelligent actions within compound and changing environments. We present a motivating scenario in banking and argue that existing Recommender Systems: (i) do not use domain experts’ knowledge to adapt to new situations; (ii) may not be able to predict the ratings or preferences a customer would give to a product (e.g., a loan, deposit, or trust service); and (iii) do not support data capture and analytics around customers’ cognitive activities, nor use these to provide intelligent and time-aware recommendations.
      Citation: Algorithms
      PubDate: 2020-07-22
      DOI: 10.3390/a13080176
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 177: Sphere Fitting with Applications to
           Machine Tracking

    • Authors: Dror Epstein, Dan Feldman
      First page: 177
      Abstract: We suggest a provable and practical approximation algorithm for fitting a set P of n points in ℝ^d to a sphere. Here, a sphere is represented by its center x ∈ ℝ^d and radius r > 0. The goal is to minimize the sum ∑_{p ∈ P} | ‖p − x‖ − r | of distances to the points, up to a multiplicative factor of 1 ± ε for a given constant ε > 0, over every such r and x. Our main technical result is a data summarization of the input set, called a coreset, that approximates the above sum of distances on the original (big) set P for every sphere. Then, an accurate sphere can be extracted quickly via an inefficient exhaustive search from the small coreset. Most articles focus mainly on sphere identification (e.g., circles in a 2D image) rather than finding the exact match (in the sense of extent measures), and do not provide approximation guarantees. We implement our algorithm and provide extensive experimental results on both synthetic and real-world data. We then integrate our algorithm into a mechanical pressure control system whose main bottleneck is tracking a falling ball. Full open source is also provided.
      Citation: Algorithms
      PubDate: 2020-07-22
      DOI: 10.3390/a13080177
      Issue No: Vol. 13, No. 8 (2020)
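
For readers who want to experiment with the objective described in this abstract, the following Python sketch minimizes ∑_{p ∈ P} | ‖p − x‖ − r | directly. It exploits the fact that, for a fixed center, the radius minimizing a sum of absolute deviations is the median of the point-to-center distances. This is a naive baseline for intuition only; the paper's coreset construction and its guarantees are not reproduced, and all data and names below are illustrative.

```python
# Toy fit of a sphere minimizing sum_{p in P} | ||p - x|| - r |.
# For a fixed center x, the optimal r is the median of the distances,
# so we only search over the center.
import numpy as np
from scipy.optimize import minimize

def sphere_cost(x, P):
    d = np.linalg.norm(P - x, axis=1)   # distances of all points to center x
    r = np.median(d)                    # best radius for this center
    return np.abs(d - r).sum()

rng = np.random.default_rng(0)
true_c, true_r = np.array([1.0, -2.0, 0.5]), 3.0
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
P = true_c + true_r * dirs + 0.05 * rng.normal(size=(200, 3))  # noisy sphere samples

res = minimize(sphere_cost, P.mean(axis=0), args=(P,), method="Nelder-Mead")
d = np.linalg.norm(P - res.x, axis=1)
print("center ~", res.x, " radius ~", np.median(d))
```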
       
  • Algorithms, Vol. 13, Pages 178: The Model Order Reduction Method as an
           Effective Way to Implement GPC Controller for Multidimensional Objects

    • Authors: Plamowski, Kephart
      First page: 178
      Abstract: The paper addresses issues associated with implementing GPC controllers in systems with multiple input signals. Depending on the method of identification, the resulting models may be of a high order and, when applied in a control/regulation law, may result in numerical errors due to the limitations of representing values as double-precision floating-point numbers. This phenomenon is to be avoided, because even if the model is correct, the resulting numerical errors will lead to poor control performance. An effective way to identify, and at the same time eliminate, this unfavorable feature is to reduce the model order. A method of model order reduction that effectively mitigates these issues is presented in this paper. First, the Generalized Predictive Control (GPC) algorithm is presented, followed by a discussion of the conditions that result in high-order models. Examples are included in which the discussed problem is demonstrated, along with the subsequent results after the reduction. The obtained results and formulated conclusions are valuable for industry practitioners who implement predictive control.
      Citation: Algorithms
      PubDate: 2020-07-23
      DOI: 10.3390/a13080178
      Issue No: Vol. 13, No. 8 (2020)
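
The abstract does not reproduce the reduction procedure itself; as a rough illustration of model order reduction in this setting, the sketch below implements textbook square-root balanced truncation for a stable linear state-space model using SciPy. It is a generic stand-in, not the method of the paper, and the 6-state random system is purely illustrative.

```python
# Square-root balanced truncation (Laub's method) for a stable LTI model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(P, lower=True)                  # needs (A, B) controllable
    Lo = cholesky(Q, lower=True)                  # needs (A, C) observable
    U, s, Vt = svd(Lo.T @ Lc)                     # s = Hankel singular values
    S12 = np.diag(s[:k] ** -0.5)
    T = Lc @ Vt[:k].T @ S12                       # truncated balancing transform
    Ti = S12 @ U[:, :k].T @ Lo.T                  # its left inverse (Ti @ T = I)
    return Ti @ A @ T, Ti @ B, C @ T, s

# stable 6-state SISO example, reduced to order 2
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1) * np.eye(6)  # shift to stability
B = rng.normal(size=(6, 1)); C = rng.normal(size=(1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=2)
print("Hankel singular values:", np.round(hsv, 4))
```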
       
  • Algorithms, Vol. 13, Pages 179: Two-Component Bayesian Hierarchical Models
           for Cost-Benefit Analysis of Traffic Barrier Crash Count

    • Authors: Mahdi Rezapour, Khaled Ksaibati
      First page: 179
      Abstract: Road departure crashes tend to be hazardous, especially in rural areas like Wyoming. Traffic barriers can be installed to mitigate the severity of those crashes; however, the severity of traffic barrier crashes still persists. Besides various driver and environmental characteristics, roadway and barrier geometric characteristics play a critical role in the severity of barrier crashes. The Wyoming Department of Transportation (WYDOT) has initiated a project to identify and optimize the heights of those barriers that are below the design standard, while prioritizing them based on monetary benefit. The aim is to optimize first the barriers that need immediate attention, given the limited budget, and then all other barriers that are below the design standard. In order to account for both the frequency and the severity of crashes, equivalent property damage only (EPDO) was considered. Data of this type, besides being over-dispersed, exhibit an excess of zeroes. Thus, a two-component model was employed to provide a flexible way of addressing this problem. Besides this technique, a one-component hierarchical modeling approach was considered for comparison purposes. This paper presents an empirical cost-benefit analysis based on Bayesian hierarchical machine learning techniques. After identifying the best model in terms of performance, measured by the deviance information criterion (DIC), the results were converted into an equation, and the equation was used as a machine learning technique: an automated method generated costs based on barriers’ current conditions, and then based on optimized barrier heights. The empirical analysis showed that cost-sensitive modeling and machine learning deployment can serve as an effective means of cost-benefit analysis, by measuring the costs of barrier enhancements and the benefits added over the years, and consequently prioritizing barriers under a limited budget. A comprehensive discussion of the two-component models, zero-inflated and hurdle, is included in the manuscript.
      Citation: Algorithms
      PubDate: 2020-07-23
      DOI: 10.3390/a13080179
      Issue No: Vol. 13, No. 8 (2020)
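
As a frequentist stand-in for the two-component idea discussed above (the paper itself uses Bayesian hierarchical zero-inflated and hurdle models), the sketch below fits a zero-inflated Poisson to simulated over-dispersed, zero-heavy counts with statsmodels; the data and coefficients are synthetic.

```python
# Zero-inflated Poisson fit on simulated crash-like counts: a
# structural-zero component mixed with a covariate-dependent Poisson.
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 500
covariate = rng.normal(size=n)
lam = np.exp(0.5 + 0.8 * covariate)
structural_zero = rng.random(n) < 0.4           # excess zeroes
y = np.where(structural_zero, 0, rng.poisson(lam))

X = np.column_stack([np.ones(n), covariate])    # count-model design matrix
res = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(maxiter=200,
                                                               disp=False)
print(res.summary())
```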
       
  • Algorithms, Vol. 13, Pages 180: A Predictive Analysis on Emerging
           Technology Utilization in Industrialized Construction in the United States
           and China

    • Authors: Bing Qi, Shuyu Qian, Aaron Costin
      First page: 180
      Abstract: Considering the increasing use of emerging technologies in industrialized construction in recent years, the primary objective of this article is to develop and validate predictive models for the emerging technology utilization level of industrialized construction industry practitioners. Our preliminary research results indicate that company background and personal career profiles can significantly affect practitioners’ technology utilization level. Thus, our prediction model is based on four variables: company size, company type, working experience, and working position. The United States and China are selected as the case studies to validate the prediction model. First, a well-designed questionnaire survey is distributed to industrialized construction industry practitioners from the two countries, yielding 81 and 99 valid responses, respectively. Then, ordinal logistic regression is used to develop a set of models to predict the practitioners’ utilization level of the four main technology types. Finally, an external test dataset consisting of 16 cases indicates that the prediction models have high accuracy. The results also reflect some differences in technology utilization status in the industrialized construction industry between the United States and China. The major contribution of this research is offering an efficient and accurate method to predict practitioners’ technology utilization level in industrialized construction. Significantly, the models are believed to have wide application in promoting emerging technologies in actual industrialized construction.
      Citation: Algorithms
      PubDate: 2020-07-24
      DOI: 10.3390/a13080180
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 181: On the Optimal Calculation of the Rice
           Coding Parameter

    • Authors: Fernando Solano Donado
      First page: 181
      Abstract: In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms that partition the input sequence of data into sub-sequences, such that if each sub-sequence is coded with a different Rice parameter, the overall code length is minimised. An algorithm for finding the optimal partitioning solution for Rice codes is proposed, as well as fast heuristics, based on the understanding of the problem trade-offs.
      Citation: Algorithms
      PubDate: 2020-07-27
      DOI: 10.3390/a13080181
      Issue No: Vol. 13, No. 8 (2020)
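
To make the objective concrete: a Rice code with parameter k writes the quotient v >> k in unary ((v >> k) + 1 bits) followed by the k low-order remainder bits, so the cost of a block is ∑((v_i >> k) + 1 + k). A brute-force search over k, sketched below, is the baseline that the paper's bounding and partitioning results speed up; the sample values are arbitrary.

```python
# Exhaustive selection of the optimal Rice coding parameter k.
def rice_length(values, k):
    # unary quotient + terminator bit + k remainder bits, per value
    return sum((v >> k) + 1 + k for v in values)

def optimal_rice_parameter(values, k_max=31):
    return min(range(k_max + 1), key=lambda k: rice_length(values, k))

data = [3, 7, 18, 2, 95, 14, 6, 41]
k = optimal_rice_parameter(data)
print(f"optimal k = {k}, total bits = {rice_length(data, k)}")
```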
       
  • Algorithms, Vol. 13, Pages 182: Trajectory Clustering and k-NN for Robust
           Privacy Preserving k-NN Query Processing in GeoSpark

    • Authors: Elias Dritsas, Andreas Kanavos, Maria Trigka, Gerasimos Vonitsanos, Spyros Sioutas, Athanasios Tsakalidis
      First page: 182
      Abstract: Privacy preserving and anonymity have gained significant concern from the big data perspective. We take the view that forthcoming frameworks and theories will establish several solutions for privacy protection. k-anonymity is considered a key solution that has been widely employed to prevent data re-identification, and it concerns us in the context of this work. Data modeling has also gained significant attention from the big data perspective. It is believed that advancing distributed environments will provide users with several solutions for efficient spatio-temporal data management. GeoSpark is utilized in the current work, as it is a key solution that has been widely employed for spatial data. Specifically, it works on top of Apache Spark, the main framework leveraged by the research community and organizations for big data transformation, processing and visualization. To this end, we focused on trajectory data representation so as to be applicable to the GeoSpark environment, and a GeoSpark-based approach is designed for the efficient management of real spatio-temporal data. The next step is to gain a deeper understanding of the data through the application of k nearest neighbor (k-NN) queries, either using indexing methods or otherwise. The k-anonymity set computation, which is the main component for privacy preservation evaluation and the main issue of our previous works, is evaluated in the GeoSpark environment. More to the point, the focus here is on the time cost of k-anonymity set computation along with vulnerability measurement. The extracted results are presented in tables and figures for visual inspection.
      Citation: Algorithms
      PubDate: 2020-07-28
      DOI: 10.3390/a13080182
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 183: Influence Maximization with Priority in
           Online Social Networks

    • Authors: Canh V. Pham, Dung K. T. Ha, Quang C. Vu, Anh N. Su, Huan X. Hoang
      First page: 183
      Abstract: The Influence Maximization (IM) problem, which finds a set of k nodes (called a seed set) in a social network to initiate influence spread so that the number of influenced nodes after the propagation process is maximized, is an important problem in information propagation and social network analysis. However, previous studies ignored the priority constraint, which led to inefficient seed collections. In some real situations, companies or organizations often prioritize influencing particular potential users during their influence diffusion campaigns. Taking a new approach to these existing works, in this paper we propose a new problem called Influence Maximization with Priority (IMP), which finds a seed set of k nodes in a social network that influences the largest number of nodes, subject to the influence spread to a specific set of nodes U (called the priority set) reaching at least a given threshold T. We show that the problem is NP-hard under the well-known IC model. To find a solution, we propose two efficient algorithms with provable theoretical guarantees, called Integrated Greedy (IG) and Integrated Greedy Sampling (IGS). IG provides a (1−(1−1/k)^t)-approximate solution, where t ≥ 1 is an outcome of the algorithm; the worst-case approximation ratio, obtained when t = 1, is equal to 1/k. In addition, IGS is an efficient randomized approximation algorithm based on a sampling method that provides a (1−(1−1/k)^t − ϵ)-approximate solution with probability at least 1−δ, where ϵ > 0 and δ ∈ (0,1) are input parameters of the problem. We conduct extensive experiments on various real networks to compare our IGS algorithm to state-of-the-art IM algorithms. The results indicate that our algorithm provides better solutions in terms of influence on the priority sets, reaching approximately two to ten times the threshold T, while running time, memory usage, and influence spread are also considerable compared to the others.
      Citation: Algorithms
      PubDate: 2020-07-29
      DOI: 10.3390/a13080183
      Issue No: Vol. 13, No. 8 (2020)
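
For orientation, the sketch below implements the classic greedy baseline for influence maximization under the IC model with Monte Carlo spread estimation. The priority set U and threshold T that define IMP, and the IG/IGS machinery, are deliberately left out; the toy graph and propagation probability are made up.

```python
# Classic greedy influence maximization under the Independent Cascade model.
import random

def ic_spread(graph, seeds, p=0.1, trials=200):
    """Average number of nodes activated from `seeds` under IC."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_im(graph, k):
    seeds = []
    for _ in range(k):
        # pick the node with the largest marginal Monte Carlo spread
        best = max((u for u in graph if u not in seeds),
                   key=lambda u: ic_spread(graph, seeds + [u]))
        seeds.append(best)
    return seeds

graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print("seed set:", greedy_im(graph, k=2))
```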
       
  • Algorithms, Vol. 13, Pages 184: Iterative Algorithm for Solving Scalar
           Fractional Differential Equations with Riemann–Liouville Derivative and
           Supremum

    • Authors: Ravi Agarwal, Snezhana Hristova, Donal O’Regan, Kremena Stefanova
      First page: 184
      Abstract: The initial value problem for a special type of scalar nonlinear fractional differential equation with a Riemann–Liouville fractional derivative is studied. The main characteristic of the equation is the presence of the supremum of the unknown function over a previous time interval. Equations of this type are difficult to solve explicitly, so approximate methods are needed. In this paper, mild lower and mild upper solutions are first defined. Then, based on these definitions and the application of the monotone-iterative technique, we present an algorithm for constructing two types of successive approximations. The two sequences converge monotonically from above and from below, respectively, to the mild solutions of the given problem. The suggested iterative scheme is applied to particular problems to illustrate its application.
      Citation: Algorithms
      PubDate: 2020-07-30
      DOI: 10.3390/a13080184
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 185: Machine Learning-Guided Dual Heuristics
           and New Lower Bounds for the Refueling and Maintenance Planning Problem of
           Nuclear Power Plants

    • Authors: Nicolas Dupin, El-Ghazali Talbi
      First page: 185
      Abstract: This paper studies the hybridization of Mixed Integer Programming (MIP) with dual heuristics and machine learning techniques to provide dual bounds for a large-scale optimization problem from an industrial application. The case study is the EURO/ROADEF Challenge 2010: optimizing the refueling and maintenance planning of nuclear power plants. Several MIP relaxations are presented to provide dual bounds by computing smaller MIPs than the original problem. It is proven how to get dual bounds with scenario decomposition in the different two-stage MILP formulations, with the selection of scenarios guided by machine learning techniques. Several sets of dual bounds are computable, significantly improving the former best dual bounds in the literature and justifying the quality of the best known primal solution.
      Citation: Algorithms
      PubDate: 2020-07-30
      DOI: 10.3390/a13080185
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 186: A Review on Recent Advancements in FOREX
           Currency Prediction

    • Authors: Md. Saiful Islam, Emam Hossain, Abdur Rahman, Mohammad Shahadat Hossain, Karl Andersson
      First page: 186
      Abstract: In recent years, the foreign exchange (FOREX) market has attracted quite a lot of scrutiny from researchers all over the world. Due to its vulnerable characteristics, different types of research have been conducted to accomplish the task of predicting future FOREX currency prices accurately. In this research, we present a comprehensive review of recent advancements in FOREX currency prediction approaches. In addition, we provide some information about the FOREX and cryptocurrency markets. We wanted to analyze the most recent works in this field and therefore considered only papers published from 2017 to 2019. We used a keyword-based searching technique to filter out popular and relevant research. Moreover, we applied a selection algorithm to determine which papers to include in this review. Based on our selection criteria, we reviewed 39 research articles published by Elsevier, Springer, and IEEE Xplore that predicted future FOREX prices within the stipulated time. Our research shows that in recent years, researchers have been interested mostly in neural network models, pattern-based approaches, and optimization techniques. Our review also shows that many deep learning algorithms, such as the gated recurrent unit (GRU) and long short-term memory (LSTM), have been fully explored and show huge potential in time series prediction.
      Citation: Algorithms
      PubDate: 2020-07-30
      DOI: 10.3390/a13080186
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 187: Constructing Reliable Computing
           Environments on Top of Amazon EC2 Spot Instances

    • Authors: Altino M. Sampaio, Jorge G. Barbosa 
      First page: 187
      Abstract: Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable and fixed-price on-demand instances. The drawback, however, is that the delay in acquiring spots can be incredibly high. Moreover, SIs may not always be available, as they can be reclaimed by EC2 at any given time, with a two-minute interruption notice. In this paper, we propose a multi-workflow scheduling algorithm, allied with a container migration-based mechanism, to dynamically construct and readjust virtual clusters on top of non-reserved EC2 pricing model instances. Our solution leverages recent findings on the performance and behavior characteristics of EC2 spots. We conducted simulations by submitting real-life workflow applications, constrained by user-defined deadline and budget quality of service (QoS) parameters. The results indicate that our solution improves the rate of completed tasks by almost 20%, and the rate of completed workflows by at least 30%, compared with other state-of-the-art algorithms, in a worst-case scenario.
      Citation: Algorithms
      PubDate: 2020-07-31
      DOI: 10.3390/a13080187
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 188: Application of the Reed-Solomon Algorithm
           as a Remote Sensing Data Fusion Tool for Land Use Studies

    • Authors: Piotr A. Werner
      First page: 188
      Abstract: The Reed-Solomon algorithm is well known in different fields of computer science. The novelty of this study lies in the different interpretation of the algorithm itself and its scope of application for remote sensing, especially at the preparatory stage, i.e., data fusion. A short review of the attempts to use different data fusion approaches in geospatial technologies explains the possible usage of the algorithm. The rationale behind its application for data fusion is to include all possible information from all acquired spectral bands, assuming that complete composite information in the form of one compound image will improve both the quality of visualization and some aspects of further quantitative and qualitative analyses. The concept arose from an empirical, heuristic combination of geographic information systems (GIS), map algebra, and two-dimensional cellular automata. The challenges are related to handling big quantitative data sets and the awareness that these numbers are in fact descriptors of a real-world multidimensional view. An empirical case study makes it easier to understand the operationalization of the Reed-Solomon algorithm for land use studies.
      Citation: Algorithms
      PubDate: 2020-08-03
      DOI: 10.3390/a13080188
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 189: Node Placement Optimization of Wireless
           Sensor Networks Using Multi-Objective Adaptive Degressive Ary Number
           Encoded Genetic Algorithm

    • Authors: Yijie Zhang, Mandan Liu
      First page: 189
      Abstract: The wireless sensor network (WSN) has the advantages of low cost, high monitoring accuracy, good fault tolerance, remote monitoring and convenient maintenance. It has been widely used in various fields. In the WSN, the placement of node sensors has a great impact on its coverage, energy consumption and some other factors. In order to improve the convergence speed of a node placement optimization algorithm, the encoding method is improved in this paper. The degressive ary number encoding is further extended to a multi-objective optimization problem. Furthermore, the adaptive changing rule of ary number is proposed by analyzing the experimental results of the N-ary number encoded algorithm. Then a multi-objective optimization algorithm adopting the adaptive degressive ary number encoding method has been used in optimizing the node placement in WSN. The experiments show that the proposed adaptive degressive ary number encoded algorithm can improve both the optimization effect and search efficiency when solving the node placement problem.
      Citation: Algorithms
      PubDate: 2020-08-03
      DOI: 10.3390/a13080189
      Issue No: Vol. 13, No. 8 (2020)
       
  • Algorithms, Vol. 13, Pages 190: On a Nonsmooth Gauss–Newton Algorithms
           for Solving Nonlinear Complementarity Problems

    • Authors: Marek J. Śmietański
      First page: 190
      Abstract: In this paper, we propose a new version of the generalized damped Gauss–Newton method for solving nonlinear complementarity problems, based on the transformation to a nonsmooth equation that is equivalent to some unconstrained optimization problem. The B-differential plays the role of the derivative. We present two types of algorithms (exact and inexact), which have superlinear and global convergence for semismooth cases. These results can be applied to efficiently find all solutions of the nonlinear complementarity problems under some mild assumptions. The results of the numerical tests are attached as a complement to the theoretical considerations.
      Citation: Algorithms
      PubDate: 2020-08-04
      DOI: 10.3390/a13080190
      Issue No: Vol. 13, No. 8 (2020)
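
The nonsmooth-equation reformulation mentioned in the abstract can be made concrete with the Fischer-Burmeister function φ(a, b) = √(a² + b²) − a − b, whose zeros encode the complementarity conditions. The sketch below runs a plain semismooth Newton iteration on a tiny linear complementarity problem; it illustrates the reformulation only and omits the damping, inexactness, and B-differential analysis of the paper.

```python
# Semismooth Newton on the Fischer-Burmeister reformulation of the NCP
# x >= 0, F(x) = Mx + q >= 0, x'F(x) = 0.
import numpy as np

def fb(a, b):
    return np.sqrt(a * a + b * b) - a - b

def solve_ncp(M, q, x0, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        Fx = M @ x + q
        Phi = fb(x, Fx)
        if np.linalg.norm(Phi) < tol:
            break
        r = np.sqrt(x**2 + Fx**2) + 1e-12       # regularize the kink at (0, 0)
        Da = np.diag(x / r - 1.0)               # d phi / d a
        Db = np.diag(Fx / r - 1.0)              # d phi / d b
        J = Da + Db @ M                         # an element of the B-differential
        x = x + np.linalg.solve(J, -Phi)
    return x

M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([-4.0, -5.0])
x = solve_ncp(M, q, np.ones(2))
print("x =", x, " F(x) =", M @ x + q)
```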
       
  • Algorithms, Vol. 13, Pages 191: Faster Algorithms for Mining Shortest-Path
           Distances from Massive Time-Evolving Graphs

    • Authors: Mattia D’Emidio
      First page: 191
      Abstract: Computing shortest-path distances is a fundamental primitive in the context of graph data mining, since this kind of information is essential in a broad range of prominent applications, including social network analysis, data routing, web search optimization, database design and route planning. Standard algorithms for shortest paths (e.g., Dijkstra’s) do not scale well with the graph size, as they can take more than a second, or incur huge memory overheads, to answer a single distance query on large-scale graph datasets. Hence, they are not suited to mining distances from big graphs, which are becoming the norm in most modern application contexts. Therefore, to achieve faster query answering, smarter and more scalable methods have been designed, the most effective of them based on precomputing and querying a compact representation of the transitive closure of the input graph, called the 2-hop-cover labeling. To use such approaches in realistic time-evolving scenarios, when the managed graph undergoes topological modifications over time, specific dynamic algorithms, carefully updating the labeling as the graph evolves, have been introduced. In fact, recomputing the 2-hop-cover structure from scratch every time the graph changes is not an option, as it induces unsustainable time overheads. While the state-of-the-art dynamic algorithm for updating a 2-hop-cover labeling against incremental modifications (insertions of arcs/vertices, arc weight decreases) offers very fast update times, the only known solution for decremental modifications (deletions of arcs/vertices, arc weight increases) is still far from being considered practical, as it requires up to tens of seconds of processing per update in several prominent classes of real-world inputs, as experimentation shows. In this paper, we introduce a new dynamic algorithm to update 2-hop-cover labelings against decremental changes. We prove its correctness, formally analyze its worst-case performance, and assess its effectiveness through an experimental evaluation employing both real-world and synthetic inputs. Our results show that it improves, by up to several orders of magnitude, upon the average update times of the only existing decremental algorithm, thus representing a step forward towards real-time distance mining in general, massive time-evolving graphs.
      Citation: Algorithms
      PubDate: 2020-08-04
      DOI: 10.3390/a13080191
      Issue No: Vol. 13, No. 8 (2020)
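
For context, the query side of a 2-hop-cover labeling is tiny: every vertex stores (hub, distance) pairs, and a distance query scans the two labels for common hubs. The Python sketch below shows this query on a hand-built toy labeling; the dynamic updating under decremental changes, which is the paper's subject, is far more involved.

```python
# Distance query on a 2-hop-cover labeling:
# dist(u, v) = min over common hubs h of d(u, h) + d(h, v).
def hub_query(labels, u, v):
    Lu, Lv = labels[u], labels[v]
    common = set(Lu) & set(Lv)
    return min((Lu[h] + Lv[h] for h in common), default=float("inf"))

# toy labeling for the unit-weight path graph a - b - c, with b as the hub
labels = {
    "a": {"a": 0, "b": 1},
    "b": {"b": 0},
    "c": {"b": 1, "c": 0},
}
print(hub_query(labels, "a", "c"))   # 2
print(hub_query(labels, "a", "b"))   # 1
```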
       
  • Algorithms, Vol. 13, Pages 155: Embedded Bayesian Network Contribution for
           a Safe Mission Planning of Autonomous Vehicles

    • Authors: Catherine Dezan, Sara Zermani, Chabha Hireche
      First page: 155
      Abstract: Bayesian Networks (BN) are probabilistic models that are commonly used for diagnosis in numerous domains (medicine, finance, transport, robotics, …). In the case of autonomous vehicles, they can contribute to the elaboration of intelligent monitors that take the environmental context into account. We show in this paper some main abilities of BNs that can help in the elaboration of fault detection, isolation and recovery (FDIR) modules. One of the main difficulties with the BN model is generally to elaborate it for the case under study. We therefore propose automatic generation techniques that start from failure mode and effects analysis (FMEA)-like tables and use a design-pattern approach. Once defined, these modules have to operate online for autonomous vehicles. In a second part, we propose a design methodology to embed real-time and non-intrusive implementations of the BN modules using FPGA-SoC support. We show that the FPGA implementation can offer an interesting speed-up with very limited energy cost. Lastly, we show how these BN modules can be incorporated into the decision-making model for the mission planning of unmanned aerial vehicles (UAVs). We illustrate the integration by means of two models: the Decision Network model, which is a straightforward extension of the BN model, and the BFM model, which is an extension of the Markov Decision Process (MDP) decision-making model incorporating a BN. We illustrate the different proposals with realistic examples and show that the hybrid implementation on FPGA-SoC can offer some benefits.
      Citation: Algorithms
      PubDate: 2020-06-28
      DOI: 10.3390/a13070155
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 156: Generalized Polynomial Chaos Expansion for
           Fast and Accurate Uncertainty Quantification in Geomechanical Modelling

    • Authors: Claudia Zoccarato, Laura Gazzola, Massimiliano Ferronato, Pietro Teatini
      First page: 156
      Abstract: Geomechanical modelling of the processes associated with the exploitation of subsurface resources, such as land subsidence or triggered/induced seismicity, is a common practice of major interest. The prediction reliability depends on different sources of uncertainty, such as the parameterization of the constitutive model characterizing the deep rock behaviour. In this study, we focus on a Sobol’-based sensitivity analysis and on uncertainty reduction via assimilation of land deformations. A synthetic test case application on a deep hydrocarbon reservoir is considered, where land settlements are predicted with the aid of a 3-D Finite Element (FE) model. Data assimilation is performed via the Ensemble Smoother (ES) technique and its variation in the form of Multiple Data Assimilation (ES-MDA). However, ES convergence is guaranteed only with a large number of Monte Carlo (MC) simulations, which may be computationally infeasible in large-scale and complex systems. For this reason, a surrogate model based on the generalized Polynomial Chaos Expansion (gPCE) is proposed as an approximation of the forward problem. This approach allows the Sobol’ indices for the sensitivity analysis to be computed efficiently and greatly reduces the computational cost of the original ES and ES-MDA formulations, while also enhancing the accuracy of the overall prediction process.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070156
      Issue No: Vol. 13, No. 7 (2020)
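
A one-dimensional caricature of the surrogate idea: with a standard-normal input, a gPCE is a series in probabilists' Hermite polynomials, which NumPy can fit by least squares. The "forward model" below is a stand-in nonlinear function, not the paper's finite-element simulator.

```python
# 1-D polynomial-chaos surrogate: fit Hermite polynomials to model runs,
# then evaluate the cheap surrogate instead of the expensive model.
import numpy as np
from numpy.polynomial import hermite_e as H

def forward_model(theta):              # placeholder for the expensive FE run
    return np.exp(0.3 * theta) + 0.1 * theta**2

rng = np.random.default_rng(0)
theta_train = rng.normal(size=200)                 # Monte Carlo design points
y_train = forward_model(theta_train)
coef = H.hermefit(theta_train, y_train, deg=5)     # gPCE coefficients

theta_test = rng.normal(size=100000)
y_surrogate = H.hermeval(theta_test, coef)         # cheap surrogate evaluations
print("surrogate mean:", y_surrogate.mean())
print("first-order coefficient (sensitivity-like):", coef[1])
```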
       
  • Algorithms, Vol. 13, Pages 157: The RONO (Rank-Order-Normalization)
           Procedure for Power-Spectrum Analysis of Datasets with Non-Normal
           Distributions

    • Authors: Sturrock, Scholkmann
      First page: 157
      Abstract: Standard (Lomb-Scargle, likelihood, etc.) procedures for power-spectrum analysis provide convenient estimates of the significance of any peak in a power spectrum, based—typically—on the assumption that the measurements being analyzed have a normal (i.e. Gaussian) distribution. However, the measurement sequence provided by a real experiment or a real observational program may not meet this requirement. The RONO (rank-order normalization) procedure generates a proxy distribution that retains the rank-order of the original measurements but has a strictly normal distribution. The proxy distribution may then be analyzed by standard power-spectrum analysis. We show by an example that the resulting power spectrum may prove to be quite close to the power spectrum obtained from the original data by a standard procedure, even if the distribution of the original measurements is far from normal. Such a comparison would tend to validate the original analysis.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070157
      Issue No: Vol. 13, No. 7 (2020)
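
The procedure itself is short enough to sketch directly: replace each measurement by the standard-normal quantile of its mid-rank. The Python below, with illustrative exponential data, follows this recipe; details such as tie handling are one plausible choice, not necessarily the authors'.

```python
# Rank-order normalization: a rank-preserving, exactly-Gaussian-shaped proxy
# series that can then be fed to standard power-spectrum analysis.
import numpy as np
from scipy.stats import norm, rankdata

def rono(x):
    ranks = rankdata(x)                       # 1 .. n, ties get mid-ranks
    return norm.ppf((ranks - 0.5) / len(x))   # normal quantiles of rank positions

x = np.random.default_rng(0).exponential(size=1000)   # strongly non-normal data
z = rono(x)
print("proxy mean/std:", z.mean().round(3), z.std().round(3))
```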
       
  • Algorithms, Vol. 13, Pages 158: Fuzzy C-Means Clustering Algorithm with
           Multiple Fuzzification Coefficients

    • Authors: Khang, Vuong, Tran, Fowler
      First page: 158
      Abstract: Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070158
      Issue No: Vol. 13, No. 7 (2020)
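
A rough sketch of the paper's central modification: run FCM but let each sample k carry its own fuzzification coefficient m_k in the membership and center updates. The rule used below for assigning m_k, and the synthetic two-blob data, are illustrative assumptions.

```python
# Fuzzy C-means with a per-sample fuzzification coefficient m_k:
# u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m_k - 1)),  v_i = sum_k u_ik^m_k x_k / sum_k u_ik^m_k
import numpy as np

def fcm_multi_m(X, c, m, n_iter=100, eps=1e-9):
    n = len(X)
    rng = np.random.default_rng(0)
    V = X[rng.choice(n, c, replace=False)]            # initial centers
    for _ in range(n_iter):
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps
        U = np.empty((n, c))
        for k in range(n):                            # per-sample exponent m_k
            ratio = (D[k][:, None] / D[k][None, :]) ** (2.0 / (m[k] - 1.0))
            U[k] = 1.0 / ratio.sum(axis=1)
        W = U ** m[:, None]                           # u_ik ** m_k
        V = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted center update
    return U, V

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(5, 1, (50, 2))])
# toy rule: fuzzier exponent for samples far from the global mean
m = np.where(np.linalg.norm(X - X.mean(0), axis=1) > 3, 2.5, 1.8)
U, V = fcm_multi_m(X, c=2, m=m)
print("centers:\n", V.round(2))
```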
       
  • Algorithms, Vol. 13, Pages 159: Stream-Based Lossless Data Compression
           Applying Adaptive Entropy Coding for Hardware-Based Implementation

    • Authors: Shinichi Yamagiwa, Hayakawa, Marumo
      First page: 159
      Abstract: To meet the strong demand for very high-speed processor I/O, the physical performance of hardware I/O has grown drastically over the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because current I/O performance is no longer improving so drastically, it is time to consider another way to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data itself in the data communication path. Recent Big Data applications treat data streams that flow continuously and, because of the high speed, never allow processing to stall. Therefore, an elegant hardware-based data compression technology is demanded. This paper proposes a novel lossless data compression, called ASE coding. It encodes streaming data by applying the entropy coding approach. ASE coding instantly assigns the fewest bits to the corresponding compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, the paper demonstrates performance evaluations showing that ASE coding adaptively shrinks streaming data and also works on a small amount of hardware resources without stalling or buffering any part of the data stream.
      Citation: Algorithms
      PubDate: 2020-06-30
      DOI: 10.3390/a13070159
      Issue No: Vol. 13, No. 7 (2020)
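
A purely didactic software analogue of the occupancy idea: code a hit in the look-up table with as many bits as the current occupancy requires, and a miss as an escape plus the raw byte. The flag-bit framing, table size, and eviction policy below are invented for illustration and do not reproduce the published hardware codec.

```python
# Toy occupancy-based LUT coder: index bits shrink when the table is small.
import math

def ase_like_encode(data, table_size=16):
    table, out_bits = [], 0
    for sym in data:
        if sym in table:
            idx_bits = max(1, math.ceil(math.log2(len(table))))
            out_bits += 1 + idx_bits          # flag bit + index in current table
        else:
            out_bits += 1 + 8                 # flag bit + raw 8-bit symbol
            table.append(sym)
            if len(table) > table_size:
                table.pop(0)                  # evict the oldest entry
    return out_bits

stream = b"abababababcdcdcdcdcd"
print(f"{len(stream) * 8} raw bits -> {ase_like_encode(stream)} coded bits")
```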
       
  • Algorithms, Vol. 13, Pages 160: Text Semantic Annotation: A Distributed
           Methodology Based on Community Coherence

    • Authors: Christos Makris, Georgios Pispirigos, Michael Angelos Simos
      First page: 160
      Abstract: Text annotation is the process of identifying the sense of a textual segment within a given context to a corresponding entity on a concept ontology. As the bag of words paradigm’s limitations become increasingly discernible in modern applications, several information retrieval and artificial intelligence tasks are shifting to semantic representations for addressing the inherent natural language polysemy and homonymy challenges. With extensive application in a broad range of scientific fields, such as digital marketing, bioinformatics, chemical engineering, neuroscience, and social sciences, community detection has attracted great scientific interest. Focusing on linguistics, by aiming to identify groups of densely interconnected subgroups of semantic ontologies, community detection application has proven beneficial in terms of disambiguation improvement and ontology enhancement. In this paper we introduce a novel distributed supervised knowledge-based methodology employing community detection algorithms for text annotation with Wikipedia Entities, establishing the unprecedented concept of community Coherence as a metric for local contextual coherence compatibility. Our experimental evaluation revealed that deeper inference of relatedness and local entity community coherence in the Wikipedia graph bears substantial improvements overall via a focus on accuracy amelioration of less common annotations. The proposed methodology is propitious for wider adoption, attaining robust disambiguation performance.
      Citation: Algorithms
      PubDate: 2020-07-01
      DOI: 10.3390/a13070160
      Issue No: Vol. 13, No. 7 (2020)
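
A toy rendering of the community-coherence intuition with networkx: detect communities in a small entity graph, then pick the candidate sense whose community shares the most entities with the context. The miniature graph and scoring rule are stand-ins for the Wikipedia entity graph and the paper's Coherence metric.

```python
# Community-based word sense disambiguation on a tiny entity graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("Java_(language)", "JVM"), ("Java_(language)", "Bytecode"), ("JVM", "Bytecode"),
    ("Java_(island)", "Indonesia"), ("Java_(island)", "Jakarta"), ("Indonesia", "Jakarta"),
])
communities = [set(c) for c in greedy_modularity_communities(G)]

def disambiguate(candidates, context):
    def score(cand):
        com = next(c for c in communities if cand in c)
        return len(com & set(context))        # community coherence with context
    return max(candidates, key=score)

print(disambiguate(["Java_(language)", "Java_(island)"], ["JVM", "Bytecode"]))
```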
       
  • Algorithms, Vol. 13, Pages 161: CONDA-PM—A Systematic Review and
           Framework for Concept Drift Analysis in Process Mining

    • Authors: Ghada Elkhawaga, Mervat Abuelkheir, Sherif I. Barakat, Alaa M. Riad, Manfred Reichert
      First page: 161
      Abstract: Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation when a business process changes while being analysed is denoted as Concept Drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying the effects of the latter. Concept drift analysis is crucial to enable early detection and management of changes, that is, whether to promote a change to become part of an improved process, or to reject the change and make decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, a four-staged framework describing the phases and requirements of a concept drift analysis approach in the context of process mining. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches analysing concept drift. We apply the CONDA-PM framework to current approaches to concept drift analysis and evaluate their maturity. Applying the CONDA-PM framework highlights areas where research is needed to complement existing efforts.
      Citation: Algorithms
      PubDate: 2020-07-03
      DOI: 10.3390/a13070161
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 162: Sensitivity Analysis for Microscopic Crowd
           Simulation

    • Authors: Marion Gödel, Rainer Fischer, Gerta Köster
      First page: 162
      Abstract: Microscopic crowd simulation can help to enhance the safety of pedestrians in situations that range from museum visits to music festivals. To obtain a useful prediction, the input parameters must be chosen carefully. In many cases, a lack of knowledge or limited measurement accuracy add uncertainty to the input. In addition, for meaningful parameter studies, we first need to identify the most influential parameters of our parametric computer models. The field of uncertainty quantification offers standardized and fully automatized methods that we believe to be beneficial for pedestrian dynamics. In addition, many methods come at a comparatively low cost, even for computationally expensive problems. This allows for their application to larger scenarios. We aim to identify and adapt fitting methods to microscopic crowd simulation in order to explore their potential in pedestrian dynamics. In this work, we first perform a variance-based sensitivity analysis using Sobol’ indices and then crosscheck the results by a derivative-based measure, the activity scores. We apply both methods to a typical scenario in crowd simulation, a bottleneck. Because constrictions can lead to high crowd densities and delays in evacuations, several experiments and simulation studies have been conducted for this setting. We show qualitative agreement between the results of both methods. Additionally, we identify a one-dimensional subspace in the input parameter space and discuss its impact on the simulation. Moreover, we analyze and interpret the sensitivity indices with respect to the bottleneck scenario.
      Citation: Algorithms
      PubDate: 2020-07-05
      DOI: 10.3390/a13070162
      Issue No: Vol. 13, No. 7 (2020)
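
For readers new to variance-based screening, the sketch below estimates first-order Sobol' indices with the standard pick-and-freeze Monte Carlo estimator on the Ishigami test function (a common benchmark), standing in for the crowd-simulation model studied in the paper.

```python
# First-order Sobol' indices via the Saltelli-style pick-and-freeze estimator.
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

rng = np.random.default_rng(0)
N, d = 100000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                             # freeze column i from B
    Si = np.mean(fB * (ishigami(ABi) - fA)) / var   # Saltelli (2010) estimator
    print(f"S{i + 1} ~ {Si:.3f}")                   # analytic: 0.31, 0.44, 0.00
```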
       
  • Algorithms, Vol. 13, Pages 163: An Interval Type-2 Fuzzy Risk Analysis
           Model (IT2FRAM) for Determining Construction Project Contingency Reserve

    • Authors: Fateminia, Sumati, Fayek
      First page: 163
      Abstract: Determining contingency reserve is critical to project risk management. Classic methods of determining contingency reserve rely significantly on historical data and fail to effectively incorporate certain types of uncertainties such as vagueness, ambiguity, and subjectivity. In this paper, an interval type-2 fuzzy risk analysis model (IT2FRAM) is introduced in order to determine the contingency reserve. In IT2FRAM, the membership functions for the linguistic terms used to describe the probability and impact of risk and opportunity events are developed, optimized, and aggregated using interval type-2 fuzzy sets and the principle of justifiable granularity. IT2FRAM is an extension of a fuzzy arithmetic-based risk analysis method which considers such uncertainties and addresses the limitations of probabilistic and deterministic techniques of contingency determination methods. The contribution of IT2FRAM is that it considers the opinions of several subject matter experts to develop the membership functions of linguistic terms. Moreover, the effect of outlier opinions in developing the membership functions of linguistic terms is reduced. IT2FRAM also enables the aggregation of non-linear membership functions into trapezoidal membership functions. A hypothetical case study is presented in order to illustrate the application of IT2FRAM in Fuzzy Risk Analyzer© (FRA©), a risk analysis software.
      Citation: Algorithms
      PubDate: 2020-07-07
      DOI: 10.3390/a13070163
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 164: Nonparametric Estimation of Continuously
           Parametrized Families of Probability Density Functions—Computational
           Aspects

    • Authors: Wojciech Rafajłowicz
      First page: 164
      Abstract: We consider a rather general problem of nonparametric estimation of an uncountable set of probability density functions (p.d.f.’s) of the form f(x; r), where r is a non-random real variable ranging over an interval [R1, R2]. We put emphasis on the algorithmic aspects of this problem, since they are crucial for exploratory analysis of the big data needed for the estimation. A specialized learning algorithm, based on the 2D FFT, is proposed and tested on observations that allow for estimating the p.d.f.’s of jet engine temperatures as a function of rotation speed. We also derive theoretical results concerning the convergence of the estimation procedure that provide hints on selecting the parameters of the estimation algorithm.
      Citation: Algorithms
      PubDate: 2020-07-08
      DOI: 10.3390/a13070164
      Issue No: Vol. 13, No. 7 (2020)
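
      The paper's estimator is not reproduced here, but the computational trick it builds on, evaluating a kernel smoother on a grid via FFT convolution, can be sketched in a few lines; the bandwidth and grid are illustrative choices.

          import numpy as np

          def kde_fft(samples, grid_min, grid_max, n_bins=1024, bandwidth=0.1):
              # Bin the data, then convolve the histogram with a Gaussian
              # kernel in the frequency domain: O(n log n) instead of
              # O(n * n_bins). Circular convolution; in practice pad the
              # grid to avoid wrap-around at the boundaries.
              hist, edges = np.histogram(samples, bins=n_bins,
                                         range=(grid_min, grid_max), density=True)
              dx = edges[1] - edges[0]
              x = (np.arange(n_bins) - n_bins // 2) * dx
              kernel = np.exp(-0.5 * (x / bandwidth) ** 2)
              kernel /= kernel.sum()
              density = np.real(np.fft.ifft(np.fft.fft(hist) *
                                            np.fft.fft(np.fft.ifftshift(kernel))))
              return 0.5 * (edges[:-1] + edges[1:]), density

          centers, f_hat = kde_fft(np.random.default_rng(0).normal(size=5000), -4, 4)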
       
  • Algorithms, Vol. 13, Pages 165: Image Edge Detector with Gabor Type
           Filters Using a Spiking Neural Network of Biologically Inspired Neurons

    • Authors: Krishnamurthy V. Vemuru
      First page: 165
      Abstract: We report the design of a Spiking Neural Network (SNN) edge detector with biologically inspired neurons that has a conceptual similarity with both Hodgkin-Huxley (HH) model neurons and Leaky Integrate-and-Fire (LIF) neurons. The computation of the membrane potential, which is used to determine the occurrence or absence of spike events, at each time step, is carried out by using the analytical solution to a simplified version of the HH neuron model. We find that the SNN-based edge detector detects more edge pixels in images than those obtained by a Sobel edge detector. We designed a pipeline for image classification with a low-exposure frame simulation layer, SNN edge detection layers as pre-processing layers and a Convolutional Neural Network (CNN) as a classification module. We tested this pipeline for the task of classification with the Digits dataset, which is available in MATLAB. We find that the SNN-based edge detection layer increases the image classification accuracy at lower exposure times, that is, for 1 < t < T/4, where t is the number of milliseconds in a simulated exposure frame and T is the total exposure time, with reference to a Sobel edge or Canny edge detection layer in the pipeline. These results pave the way for developing novel cognitive neuromorphic computing architectures for millisecond timescale detection and object classification applications using event or spike cameras.
      Citation: Algorithms
      PubDate: 2020-07-09
      DOI: 10.3390/a13070165
      Issue No: Vol. 13, No. 7 (2020)
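
      The exact analytical update used by the authors is specific to their simplified HH model; the sketch below shows the generic leaky integrate-and-fire (LIF) membrane update that the detector conceptually resembles. All constants are illustrative.

          import numpy as np

          def lif_response(current, dt=1.0, tau=20.0, v_rest=0.0,
                           v_thresh=1.0, v_reset=0.0):
              # Leaky integrate-and-fire: dV/dt = (v_rest - V + I) / tau.
              # Exact exponential update per step for piecewise-constant
              # input; emit a spike and reset on threshold crossing.
              v = v_rest
              decay = np.exp(-dt / tau)
              spikes = []
              for i_t in current:
                  v = v_rest + (v - v_rest) * decay + i_t * (1.0 - decay)
                  if v >= v_thresh:
                      spikes.append(True)
                      v = v_reset
                  else:
                      spikes.append(False)
              return np.array(spikes)

          # Pixel intensity (e.g. a Gabor-filtered patch) as input current.
          spike_train = lif_response(np.full(100, 1.5))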
       
  • Algorithms, Vol. 13, Pages 166: Polyhedral DC Decomposition and DCA
           Optimization of Piecewise Linear Functions

    • Authors: Andreas Griewank, Andrea Walther
      First page: 166
      Abstract: For piecewise linear functions f : Rⁿ → R we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part f̌ and a concave part f̂, including a pair of generalized gradients ǧ ∈ Rⁿ ∋ ĝ. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating f itself. It is shown how f̌ and f̂ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients ǧ and −ĝ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of f, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
      Citation: Algorithms
      PubDate: 2020-07-11
      DOI: 10.3390/a13070166
      Issue No: Vol. 13, No. 7 (2020)
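
      A toy illustration of the DC structure the paper works with: a piecewise linear function written as a max of affines (convex part) plus a min of affines (concave part), with one subgradient per part. The abs-linear machinery and the reflection technique are not reproduced.

          import numpy as np

          # f = f_check + f_hat with f_check(x) = max_i(a[i]*x + b[i])
          # convex and f_hat(x) = min_j(c[j]*x + d[j]) concave. Here
          # f_check = |x| and f_hat = 1 - |x|/2, so f(x) = |x|/2 + 1.
          a, b = np.array([1.0, -1.0]), np.array([0.0, 0.0])
          c, d = np.array([0.5, -0.5]), np.array([1.0, 1.0])

          def value_and_subgradients(x):
              i = np.argmax(a * x + b)      # active piece, convex part
              j = np.argmin(c * x + d)      # active piece, concave part
              f = (a[i] * x + b[i]) + (c[j] * x + d[j])
              return f, a[i], c[j]          # f(x), g_check, g_hat

          f, g_check, g_hat = value_and_subgradients(0.7)
          # A DCA step linearizes the concave part at the current iterate
          # using g_hat and minimizes the convex model f_check + g_hat*x.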
       
  • Algorithms, Vol. 13, Pages 167: Biologically Inspired Visual System
           Architecture for Object Recognition in Autonomous Systems

    • Authors: Dan Malowany, Hugo Guterman
      First page: 167
      Abstract: Computer vision is currently one of the most exciting and rapidly evolving fields of science, which affects numerous industries. Research and development breakthroughs, mainly in the field of convolutional neural networks (CNNs), opened the way to unprecedented sensitivity and precision in object detection and recognition tasks. Nevertheless, the findings in recent years on the sensitivity of neural networks to additive noise, light conditions, and to the wholeness of the training dataset indicate that this technology still lacks the robustness needed for the autonomous robotic industry. In an attempt to bring computer vision algorithms closer to the capabilities of a human operator, the mechanisms of the human visual system were analyzed in this work. Recent studies show that the mechanisms behind the recognition process in the human brain include continuous generation of predictions based on prior knowledge of the world. These predictions enable rapid generation of contextual hypotheses that bias the outcome of the recognition process. This mechanism is especially advantageous in situations of uncertainty, when visual input is ambiguous. In addition, the human visual system continuously updates its knowledge about the world based on the gaps between its prediction and the visual feedback. CNNs are feed-forward in nature and lack such top-down contextual attenuation mechanisms. As a result, although they process massive amounts of visual information during their operation, the information is not transformed into knowledge that can be used to generate contextual predictions and improve their performance. In this work, an architecture was designed that aims to integrate the concepts behind the top-down prediction and learning processes of the human visual system with the state-of-the-art bottom-up object recognition models, e.g., deep CNNs. The work focuses on two mechanisms of the human visual system: anticipation-driven perception and reinforcement-driven learning. Imitating these top-down mechanisms, together with the state-of-the-art bottom-up feed-forward algorithms, resulted in an accurate, robust, and continuously improving target recognition model.
      Citation: Algorithms
      PubDate: 2020-07-11
      DOI: 10.3390/a13070167
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 168: On the Relationship between Self-Admitted
           Technical Debt Removals and Technical Debt Measures

    • Authors: Lerina Aversano, Martina Iammarino, Mimmo Carapella, Andrea Del Vecchio, Laura Nardi
      First page: 168
      Abstract: * Correspondence: aversano@unisannio [...]
      Citation: Algorithms
      PubDate: 2020-07-11
      DOI: 10.3390/a13070168
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 169: TBRNet: Two-Stream BiLSTM Residual Network
           for Video Action Recognition

    • Authors: Xiao Wu, Qingge Ji
      First page: 169
      Abstract: Modeling spatiotemporal representations is one of the most essential yet challenging issues in video action recognition. Existing methods lack the capacity to accurately model either the correlations between spatial and temporal features or the global temporal dependencies. Inspired by the two-stream network for video action recognition, we propose an encoder–decoder framework named Two-Stream Bidirectional Long Short-Term Memory (LSTM) Residual Network (TBRNet) which takes advantage of the interaction between spatiotemporal representations and global temporal dependencies. In the encoding phase, the two-stream architecture, based on the proposed Residual Convolutional 3D (Res-C3D) network, extracts features with residual connections inserted between the two pathways, and then the features are fused to become the short-term spatiotemporal features of the encoder. In the decoding phase, those short-term spatiotemporal features are first fed into a temporal attention-based bidirectional LSTM (BiLSTM) network to obtain long-term bidirectional attention-pooling dependencies. Subsequently, those temporal dependencies are integrated with short-term spatiotemporal features to obtain global spatiotemporal relationships. On two benchmark datasets, UCF101 and HMDB51, we verified the effectiveness of our proposed TBRNet by a series of experiments, and it achieved competitive or even better results compared with existing state-of-the-art approaches.
      Citation: Algorithms
      PubDate: 2020-07-15
      DOI: 10.3390/a13070169
      Issue No: Vol. 13, No. 7 (2020)
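
      A compact PyTorch sketch of the decoder idea, a BiLSTM over per-clip features followed by temporal attention pooling; the feature dimension, hidden size and class count are placeholders, and the two-stream Res-C3D encoder is not reproduced.

          import torch
          import torch.nn as nn

          class AttentiveBiLSTM(nn.Module):
              # Bidirectional LSTM over a sequence of spatiotemporal
              # features, then a learned softmax attention over time.
              def __init__(self, feat_dim=512, hidden=256, n_classes=101):
                  super().__init__()
                  self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                                        bidirectional=True)
                  self.attn = nn.Linear(2 * hidden, 1)
                  self.head = nn.Linear(2 * hidden, n_classes)

              def forward(self, feats):                   # (batch, time, feat)
                  h, _ = self.bilstm(feats)               # (batch, time, 2*hidden)
                  w = torch.softmax(self.attn(h), dim=1)  # attention over time
                  pooled = (w * h).sum(dim=1)             # attention pooling
                  return self.head(pooled)

          logits = AttentiveBiLSTM()(torch.randn(4, 16, 512))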
       
  • Algorithms, Vol. 13, Pages 170: On the Well-Posedness of A High Order
           Convective Cahn-Hilliard Type Equations

    • Authors: Giuseppe Maria Coclite, Lorenzo di Ruvo
      First page: 170
      Abstract: High order convective Cahn-Hilliard type equations describe the faceting of a growing surface, or the dynamics of phase transitions in ternary oil-water-surfactant systems. In this paper, we prove the well-posedness of the classical solutions for the Cauchy problem associated with these equations.
      Citation: Algorithms
      PubDate: 2020-07-16
      DOI: 10.3390/a13070170
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 171: Exact Method for Generating
           Strategy-Solvable Sudoku Clues

    • Authors: Kohei Nishikawa, Takahisa Toda
      First page: 171
      Abstract: A Sudoku puzzle often has a regular pattern in the arrangement of initial digits and it is typically made solvable with known solving techniques called strategies. In this paper, we consider the problem of generating such Sudoku instances. We introduce a rigorous framework to discuss solvability for Sudoku instances with respect to strategies. This allows us to handle not only known strategies but also general strategies under a few reasonable assumptions. We propose an exact method for determining Sudoku clues for a given set of clue positions that is solvable with a given set of strategies. This is the first exact method except for a trivial brute-force search. Besides the clue generation, we present an application of our method to the problem of determining the minimum number of strategy-solvable Sudoku clues. We conduct experiments to evaluate our method, varying the position and the number of clues at random. Our method terminates within 1 min for many grids. However, as the number of clues gets closer to 20, the running time rapidly increases and exceeds the time limit set to 600 s. We also evaluate our method for several instances with 17 clue positions taken from known minimum Sudokus to see the efficiency for deciding unsolvability.
      Citation: Algorithms
      PubDate: 2020-07-16
      DOI: 10.3390/a13070171
      Issue No: Vol. 13, No. 7 (2020)
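
      Strategy-solvability as used above can be made concrete with a single strategy: the sketch below checks whether a grid is solvable by "naked singles" alone (cells with exactly one remaining candidate). The paper's framework covers general strategy sets and exact clue generation, which this does not attempt.

          def solvable_by_naked_singles(grid):
              # grid: 9x9 list of ints, 0 marks an empty position.
              # Repeatedly fill cells whose candidate set is a singleton;
              # return True if this strategy alone completes the grid.
              g = [row[:] for row in grid]
              progress = True
              while progress:
                  progress = False
                  for r in range(9):
                      for c in range(9):
                          if g[r][c]:
                              continue
                          used = set(g[r]) | {g[i][c] for i in range(9)} | {
                              g[3 * (r // 3) + i][3 * (c // 3) + j]
                              for i in range(3) for j in range(3)}
                          cands = set(range(1, 10)) - used
                          if len(cands) == 1:
                              g[r][c] = cands.pop()
                              progress = True
              return all(all(row) for row in g)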
       
  • Algorithms, Vol. 13, Pages 172: Approximate Triangulations of Grassmann
           Manifolds

    • Authors: Kevin P. Knudson
      First page: 172
      Abstract: We define the notion of an approximate triangulation for a manifold M embedded in Euclidean space. The basic idea is to build a nested family of simplicial complexes whose vertices lie in M and use persistent homology to find a complex in the family whose homology agrees with that of M. Our key examples are various Grassmann manifolds Gₖ(Rⁿ).
      Citation: Algorithms
      PubDate: 2020-07-17
      DOI: 10.3390/a13070172
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 173: Modeling Hourly Soil Temperature Using
           Deep BiLSTM Neural Network

    • Authors: Cong Li, Yaonan Zhang, Xupeng Ren
      First page: 173
      Abstract: Soil temperature (ST) plays a key role in the processes and functions of almost all ecosystems, and is also an essential parameter for various applications such as agricultural production and geothermal development and utilization. Although numerous machine learning models have been used in the prediction of ST with good results, most current studies have focused on daily or monthly ST predictions, while hourly ST predictions are scarce. This paper presents a novel scheme for forecasting the hourly ST using weather forecast data. The method considers the hourly ST prediction to be the superposition of two parts, namely, the daily average ST prediction and the ST amplitude (the difference between the hourly ST and the daily average ST) prediction. According to the results of correlation analysis, we selected nine meteorological parameters combined with two temporal parameters as the input vector for predicting the daily average ST. For the task of predicting the ST amplitude, seven meteorological parameters and one temporal parameter were selected as the inputs. Two submodels were constructed using a deep bidirectional long short-term memory network (BiLSTM). For the task of hourly ST prediction at five different soil depths at 30 sites, located in 5 common climates in the United States, the results showed that the method proposed in this paper performs best at all depths for 30 stations (100% of all) on the root mean square error (RMSE), 27 stations (90% of all) on the mean absolute error (MAE), and 30 stations (100% of all) on the coefficient of determination (R²). Moreover, the method adopted in this study displays a stronger ST prediction ability than the traditional methods under all climate types involved in the experiment. The hourly ST it produces can be used as a driving parameter for high-resolution biogeochemical models, land surface models and hydrological models, and can provide ideas for the analysis of other time series data.
      Citation: Algorithms
      PubDate: 2020-07-17
      DOI: 10.3390/a13070173
      Issue No: Vol. 13, No. 7 (2020)
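
      The superposition scheme itself is simple to state in code: hourly ST is the sum of a daily-average prediction and an hourly-amplitude prediction. The two toy predictors below are hypothetical stand-ins for the paper's BiLSTM submodels.

          import numpy as np

          # Hypothetical stand-ins for the two trained submodels.
          predict_daily_avg = lambda day_feats: 12.0 + 0.5 * day_feats.sum()
          predict_amplitude = lambda hour: 3.0 * np.sin(2.0 * np.pi * hour / 24.0)

          day_features = np.array([0.2, -0.1, 0.05])   # e.g. forecast inputs
          hours = np.arange(24)
          # Superposition: hourly ST = daily average ST + hourly amplitude.
          hourly_st = predict_daily_avg(day_features) + predict_amplitude(hours)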
       
  • Algorithms, Vol. 13, Pages 174: An Algorithm for Density Enrichment of
           Sparse Collaborative Filtering Datasets Using Robust Predictions as
           Derived Ratings

    • Authors: Margaris, Spiliotopoulos, Karagiorgos, Vassilakis
      First page: 174
      Abstract: Collaborative filtering algorithms formulate personalized recommendations for a user, first by analysing already entered ratings to identify other users with similar tastes to the target user (termed near neighbours), and then using the opinions of the near neighbours to predict which items the target user would like. However, in sparse datasets, too few near neighbours can be identified, resulting in low-accuracy predictions and even a total inability to formulate personalized predictions. This paper addresses the sparsity problem by presenting an algorithm that uses robust predictions, that is, predictions deemed highly likely to be accurate, as derived ratings. Thus, the density of sparse datasets increases, and improved rating prediction coverage and accuracy are achieved. The proposed algorithm, termed CFDR, is extensively evaluated using (1) seven widely used collaborative filtering datasets, (2) the two most widely used correlation metrics in collaborative filtering research, namely the Pearson correlation coefficient and the cosine similarity, and (3) the two most widely used error metrics in collaborative filtering, namely the mean absolute error and the root mean square error. The evaluation results show that, by successfully increasing the density of the datasets, the capacity of collaborative filtering systems to formulate personalized and accurate recommendations is considerably improved.
      Citation: Algorithms
      PubDate: 2020-07-17
      DOI: 10.3390/a13070174
      Issue No: Vol. 13, No. 7 (2020)
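
      A simplified sketch of the density-enrichment idea: predict missing ratings from similar users and write back only predictions that look reliable. The robustness test used here (close agreement among the top neighbours) is an illustrative proxy, not CFDR's actual criterion.

          import numpy as np

          def enrich_with_robust_predictions(R, k=2, max_spread=0.5):
              # R: users x items matrix, np.nan marks missing ratings.
              mask = ~np.isnan(R)
              R_out = R.copy()
              for u in range(R.shape[0]):
                  for i in range(R.shape[1]):
                      if mask[u, i]:
                          continue
                      scored = []
                      for v in range(R.shape[0]):
                          if v == u or not mask[v, i]:
                              continue
                          co = mask[u] & mask[v]     # co-rated items
                          if not co.any():
                              continue
                          a, b = R[u, co], R[v, co]
                          sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                          scored.append((sim, R[v, i]))
                      top = [r for _, r in sorted(scored, reverse=True)[:k]]
                      if len(top) == k and np.std(top) <= max_spread:
                          R_out[u, i] = np.mean(top)   # derived rating
              return R_out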
       
  • Algorithms, Vol. 13, Pages 175: Equivalence of the Frame and Halting
           Problems

    • Authors: Eric Dietrich, Chris Fields
      First page: 175
      Abstract: The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
      Citation: Algorithms
      PubDate: 2020-07-20
      DOI: 10.3390/a13070175
      Issue No: Vol. 13, No. 7 (2020)
       
  • Algorithms, Vol. 13, Pages 131: Unsupervised Text Feature Selection Using
           Memetic Dichotomous Differential Evolution

    • Authors: Ibraheem Al-Jadir, Kok Wai Wong, Chun Che Fung, Hong Xie
      First page: 131
      Abstract: Feature Selection (FS) methods have been studied extensively in the literature, and they are a crucial component of machine learning techniques. However, unsupervised text feature selection has not been well studied in document clustering problems. Feature selection can be modelled as an optimization problem due to the large number of possible candidate solutions. In this paper, a memetic method that combines Differential Evolution (DE) with Simulated Annealing (SA) for unsupervised FS is proposed. Because only two values are needed to indicate the presence or absence of a feature, a binary version of differential evolution is used. A dichotomous DE serves as this binary version, and the proposed method is named Dichotomous Differential Evolution Simulated Annealing (DDESA). This method uses dichotomous mutation instead of the standard DE mutation, which makes it more effective for binary problems. The Mean Absolute Distance (MAD) filter was used as the internal feature-subset evaluation measure in this paper. The proposed method was compared with other state-of-the-art methods, including the standard DE combined with SA (named DESA in this paper), using five benchmark datasets. The F-micro and F-macro (F-scores) and Average Distance of Document to Cluster (ADDC) measures were utilized as the evaluation measures, along with the Reduction Rate (RR). Test results showed that the proposed DDESA outperformed the other tested methods in unsupervised text feature selection.
      Citation: Algorithms
      PubDate: 2020-05-26
      DOI: 10.3390/a13060131
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 132: Short-Term Wind Speed Forecasting Using
           Statistical and Machine Learning Methods

    • Authors: Lucky O. Daniel, Caston Sigauke, Colin Chibaya, Rendani Mbuvha
      First page: 132
      Abstract: Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its representation among other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two-days-ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that ANN displays superior predictive performance based on root mean square error (RMSE), whereas SGB outperforms in terms of mean average error (MAE) and the related mean average percentage error (MAPE). A further comparison of two forecast combination methods, linear and additive quantile regression averaging, shows the latter yielding lower prediction accuracy. The additive quantile regression averaging based prediction intervals also outperform in terms of validity, reliability, quality and accuracy. Among interval combination methods, the median method performs better than its pure average counterpart. Point forecast combination and interval forecasting methods are found to improve forecast performance.
      Citation: Algorithms
      PubDate: 2020-05-26
      DOI: 10.3390/a13060132
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 133: Metric Embedding Learning on
           Multi-Directional Projections

    • Authors: Gábor Kertész
      First page: 133
      Abstract: Image-based instance recognition is a difficult problem, in some cases even for the human eye. While the latest developments in computer vision—mostly driven by deep learning—have shown that high-performance models for classification or categorization can be engineered, the problem of discriminating similar objects with a low number of samples remains challenging. Advances from multi-class classification are applied to object matching problems, as the feature extraction techniques are the same; nature-inspired multi-layered convolutional nets learn the representations, and the output of such a model maps them to a multidimensional encoding space. A metric-based loss brings same-instance embeddings close to each other. While these solutions achieve high classification performance, their efficiency is limited by the memory cost of the large number of parameters, which is related to the input image size. Upon shrinking the input, the model requires fewer trainable parameters, while performance decreases. This drawback is tackled by using compressed feature extraction, e.g., projections. In this paper, a multi-directional image projection transformation with fixed vector lengths (MDIPFL) is applied for one-shot recognition tasks, trained on Siamese and Triplet architectures. Results show that the MDIPFL-based approach achieves decent performance, despite the significantly lower number of parameters.
      Citation: Algorithms
      PubDate: 2020-05-29
      DOI: 10.3390/a13060133
      Issue No: Vol. 13, No. 6 (2020)
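
      The triplet objective mentioned above, in a minimal NumPy form; the embedding size and margin are illustrative, and the MDIPFL projection network itself is not reproduced.

          import numpy as np

          def triplet_loss(anchor, positive, negative, margin=0.2):
              # Pull the anchor towards the positive embedding and push it
              # away from the negative one by at least `margin`.
              d_pos = np.linalg.norm(anchor - positive)
              d_neg = np.linalg.norm(anchor - negative)
              return max(0.0, d_pos - d_neg + margin)

          rng = np.random.default_rng(1)
          print(triplet_loss(rng.normal(size=64), rng.normal(size=64),
                             rng.normal(size=64)))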
       
  • Algorithms, Vol. 13, Pages 134: Study of Quasi-Static Magnetization with
           the Random-Field Ising Model

    • Authors: Roman Gozdur
      First page: 134
      Abstract: The topic of this paper is modeling based on Hamiltonian spin interactions. Preliminary studies on the identification of a quasi-static magnetizing field in a magnetic system were presented. The random-field Ising model was then used to simulate a simplified ferromagnetic structure. The validation of algorithms and simulation tests were carried out for 2D and 3D model spaces containing at least 10⁶ unit cells. The research showed that the response of a slowly driven magnetic system did not depend on the external field sweep rate. Changes in the spatial magnetization of the lattice were very similar below a certain rate of external field change known as the quasi-static boundary. The observed differences in the magnetization curves obtained under quasi-static conditions stemmed from the random nature of the molecular field and the avalanche-like magnetization process.
      Citation: Algorithms
      PubDate: 2020-05-29
      DOI: 10.3390/a13060134
      Issue No: Vol. 13, No. 6 (2020)
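
      A minimal zero-temperature sketch of the quasi-static protocol on a 2D random-field Ising lattice: relax all spins at each external-field step and record the magnetization. Lattice size and disorder strength are illustrative.

          import numpy as np

          def relax(s, h_ext, h_rand, J=1.0, max_sweeps=200):
              # Sweep until no spin flips: each spin aligns with the sign
              # of its local field (nearest neighbours + quenched random
              # field + external field), producing avalanche dynamics.
              n = s.shape[0]
              for _ in range(max_sweeps):
                  flipped = False
                  for r in range(n):
                      for c in range(n):
                          nb = (s[(r - 1) % n, c] + s[(r + 1) % n, c]
                                + s[r, (c - 1) % n] + s[r, (c + 1) % n])
                          new = 1 if J * nb + h_rand[r, c] + h_ext >= 0 else -1
                          if new != s[r, c]:
                              s[r, c] = new
                              flipped = True
                  if not flipped:
                      break
              return s

          rng = np.random.default_rng(0)
          spins = rng.choice([-1, 1], size=(32, 32))
          h_rand = rng.normal(0.0, 1.5, size=(32, 32))
          # Sweep the external field slowly; record the hysteresis branch.
          curve = [(h, relax(spins, h, h_rand).mean())
                   for h in np.linspace(-3, 3, 25)]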
       
  • Algorithms, Vol. 13, Pages 135: A Recursive Least-Squares Algorithm for
           the Identification of Trilinear Forms

    • Authors: Camelia Elisei-Iliescu, Laura-Maria Dogariu, Constantin Paleologu, Jacob Benesty, Andrei-Alexandru Enescu, Silviu Ciochină
      First page: 135
      Abstract: High-dimensional system identification problems can be efficiently addressed based on tensor decompositions and modelling. In this paper, we design a recursive least-squares (RLS) algorithm tailored for the identification of trilinear forms, namely RLS-TF. In our framework, the trilinear form is related to the decomposition of a third-order tensor (of rank one). The proposed RLS-TF algorithm acts on the individual components of the global impulse response, thus being efficient in terms of both performance and complexity. Simulation results indicate that the proposed solution outperforms the conventional RLS algorithm (which handles only the global impulse response), as well as the previously developed trilinear counterparts based on the least-mean-squares algorithm.
      Citation: Algorithms
      PubDate: 2020-06-01
      DOI: 10.3390/a13060135
      Issue No: Vol. 13, No. 6 (2020)
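
      For reference, one step of the conventional RLS recursion that RLS-TF builds on; the trilinear variant applies such updates to the individual tensor components rather than to the global impulse response. Sizes and the forgetting factor are illustrative.

          import numpy as np

          def rls_step(w, P, x, d, lam=0.99):
              # Gain k, a-priori error e, then weight and inverse-
              # correlation-matrix updates with forgetting factor lam.
              Px = P @ x
              k = Px / (lam + x @ Px)
              e = d - w @ x
              w = w + k * e
              P = (P - np.outer(k, Px)) / lam
              return w, P, e

          n = 8
          w, P = np.zeros(n), np.eye(n) * 100.0
          rng = np.random.default_rng(0)
          true_w = rng.normal(size=n)
          for _ in range(200):
              x = rng.normal(size=n)
              w, P, _ = rls_step(w, P, x, true_w @ x)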
       
  • Algorithms, Vol. 13, Pages 136: Improved Convergence Speed of a DCD-Based
           Algorithm for Sparse Solutions

    • Authors: Zhi Quan, Shuhua Lv
      First page: 136
      Abstract: To solve a system of equations that needs few updates, such as sparse systems, the leading dichotomous coordinate descent (DCD) algorithm is better than the cyclic DCD algorithm because of its fast speed of convergence. In the case of sparse systems requiring a large number of updates, the cyclic DCD algorithm converges faster and has a lower error level than the leading DCD algorithm. However, the leading DCD algorithm has a faster convergence speed in the initial updates. In this paper, we propose a combination of leading and cyclic DCD iterations, the leading-cyclic DCD algorithm, to improve the convergence speed of the cyclic DCD algorithm. The proposed algorithm involves two steps. First, by properly selecting the number of updates of the solution vector used in the leading DCD algorithm, a solution is obtained from the leading DCD algorithm. Second, taking the output of the leading DCD algorithm as the initial values, an improved soft output is generated by the cyclic DCD algorithm with a large number of iterations. Numerical results demonstrate that when the solution sparsity γ is in the interval [1/8, 6/8], the proposed leading-cyclic DCD algorithm outperforms both the existing cyclic and leading DCD algorithms for all iterations.
      Citation: Algorithms
      PubDate: 2020-06-04
      DOI: 10.3390/a13060136
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 137: Sparse Logistic Regression: Comparison of
           Regularization and Bayesian Implementations

    • Authors: Mattia Zanon, Giuliano Zambonin, Gian Antonio Susto, Seán McLoone
      First page: 137
      Abstract: In knowledge-based systems, besides obtaining good output prediction accuracy, it is crucial to understand the subset of input variables that have the most influence on the output, with the goal of gaining deeper insight into the underlying process. These requirements call for logistic model estimation techniques that provide a sparse solution, i.e., where coefficients associated with non-important variables are set to zero. In this work we compare the performance of two methods: the first one is based on the well-known Least Absolute Shrinkage and Selection Operator (LASSO), which involves regularization with an ℓ₁ norm; the second one is the Relevance Vector Machine (RVM), which is based on a Bayesian implementation of the linear logistic model. The two methods are extensively compared in this paper, on real and simulated datasets. Results show that, in general, the two approaches are comparable in terms of prediction performance. RVM outperforms the LASSO both in terms of structure recovery (estimation of the correct non-zero model coefficients) and prediction accuracy when the dimensionality of the data tends to increase. However, LASSO shows comparable performance to RVM when the dimensionality of the data is much higher than the number of samples, that is, p ≫ n.
      Citation: Algorithms
      PubDate: 2020-06-08
      DOI: 10.3390/a13060137
      Issue No: Vol. 13, No. 6 (2020)
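
      The LASSO side of the comparison is easy to reproduce with scikit-learn, where the ℓ₁ penalty zeroes out coefficients of uninformative inputs; the RVM has no counterpart in scikit-learn itself and is not shown. The data here are synthetic.

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          X = rng.normal(size=(200, 20))
          # Only the first two inputs actually drive the binary outcome.
          y = (X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)

          model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
          model.fit(X, y)
          print("non-zero coefficients:", np.flatnonzero(model.coef_))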
       
  • Algorithms, Vol. 13, Pages 138: Late Acceptance Hill-Climbing Matheuristic
           for the General Lot Sizing and Scheduling Problem with Rich Constraints

    • Authors: Andreas Goerler, Eduardo Lalla-Ruiz, Stefan Voß
      First page: 138
      Abstract: This paper considers the general lot sizing and scheduling problem with rich constraints exemplified by means of rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing industry and the pharmaceutical industry. To address this problem, we propose the Late Acceptance Hill-climbing Matheuristic (LAHCM) as a novel solution framework that exploits and integrates the late acceptance hill climbing algorithm and exact approaches for speeding up the solution process in comparison to solving the problem by means of a general solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template leading to high-quality solutions within short computational times.
      Citation: Algorithms
      PubDate: 2020-06-09
      DOI: 10.3390/a13060138
      Issue No: Vol. 13, No. 6 (2020)
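
      The late acceptance rule at the heart of LAHCM is compact enough to sketch in full: a candidate is accepted if it beats the cost recorded a fixed number of iterations ago (or the current cost). The toy continuous objective is only for illustration; the matheuristic couples this loop with exact subproblem solves.

          import random

          def lahc(initial, cost, neighbour, history_len=50, n_iter=20000):
              # Late acceptance hill-climbing (Burke & Bykov): compare the
              # candidate against the cost from `history_len` iterations
              # ago, which lets the search escape local optima without a
              # cooling schedule.
              cur, cur_cost = initial, cost(initial)
              best, best_cost = cur, cur_cost
              hist = [cur_cost] * history_len
              for it in range(n_iter):
                  cand = neighbour(cur)
                  c = cost(cand)
                  if c <= hist[it % history_len] or c <= cur_cost:
                      cur, cur_cost = cand, c
                      if c < best_cost:
                          best, best_cost = cand, c
                  hist[it % history_len] = cur_cost
              return best, best_cost

          best, val = lahc(initial=0.0, cost=lambda x: (x - 3.0) ** 2,
                           neighbour=lambda x: x + random.uniform(-1, 1))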
       
  • Algorithms, Vol. 13, Pages 139: Optimization Algorithms for Detection of
           Social Interactions

    • Authors: Vincenzo Cutello, Georgia Fargetta, Mario Pavone, Rocco A. Scollo
      First page: 139
      Abstract: Community detection is one of the most challenging and interesting problems in many research areas. Being able to detect highly linked communities in a network can lead to many benefits, such as understanding relationships between entities or interactions between biological genes, for instance. Two different immunological algorithms have been designed for this problem, called Opt-IA and Hybrid-IA, respectively. The main difference between the two algorithms is the search strategy and the related immunological operators developed: the former carries out a random search together with purely stochastic operators; the latter is instead based on a deterministic Local Search that tries to refine and improve the current solutions discovered. The robustness of Opt-IA and Hybrid-IA has been assessed on several real social networks. These same networks have also been used to compare both algorithms with seven other metaheuristics and the well-known greedy Louvain optimization algorithm. The experimental analysis conducted shows that Opt-IA and Hybrid-IA are reliable optimization methods for community detection, outperforming all compared algorithms.
      Citation: Algorithms
      PubDate: 2020-06-11
      DOI: 10.3390/a13060139
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 140: Special Issue on Ensemble Learning and
           Applications

    • Authors: Panagiotis Pintelas, Ioannis E. Livieris
      First page: 140
      Abstract: During the last decades, in the area of machine learning and data mining, the development of ensemble methods has gained significant attention from the scientific community. Machine learning ensemble methods combine multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been theoretically and experimentally shown to provide significantly better performance than their single base learners. In the literature, ensemble learning algorithms constitute a dominant and state-of-the-art approach for obtaining maximum performance; thus, they have been applied to a variety of real-world problems ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting.
      Citation: Algorithms
      PubDate: 2020-06-11
      DOI: 10.3390/a13060140
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 141: Dynamic Ring Exploration with (H,S) View

    • Authors: Tsuyoshi Gotoh, Yuichi Sudo, Fukuhito Ooshita, Toshimitsu Masuzawa
      First page: 141
      Abstract: Research on mobile entities (called agents) in dynamic networks has attracted a lot of attention in recent years. Exploration, which requires an agent to visit all the nodes in the network, is one of the most fundamental problems. While the exploration of dynamic networks with complete information or with no information about network changes has been studied, an agent with partial information about the network changes has not been considered yet despite its practical importance. In this paper, we consider the exploration of dynamic networks by a single agent with partial information about network changes. To the best of our knowledge, this is the very first work to investigate the exploration problem with such partial information. As a first step in this research direction, we focus on 1-interval connected rings as dynamic networks in this paper. We assume that the single agent has partial information called the (H, S) view, by which it always knows whether or not each of the links within H hops is available in each of the next S time steps. In this setting, we show that H + S ≥ n and S ≥ ⌈n/2⌉ (n is the size of the network) are necessary and sufficient conditions to explore 1-interval connected rings. Moreover, we investigate the upper and lower bounds of the exploration time. It is proven that the exploration time is O(n²) for ⌈n/2⌉ ≤ S < 2H′ − 1, O(n²/H + nH) for S ≥ max(⌈n/2⌉, 2H′ − 1), O(n²/H + n log H) for S ≥ n − 1, and Ω(n²/H) for any S, where H′ = min(H, ⌊n/2⌋).
      Citation: Algorithms
      PubDate: 2020-06-12
      DOI: 10.3390/a13060141
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 142: Parallelized Swarm Intelligence Approach
           for Solving TSP and JSSP Problems

    • Authors: Jedrzejowicz, Wierzbowska
      First page: 142
      Abstract: One of the possible approaches to solving difficult optimization problems is applying population-based metaheuristics. Among such metaheuristics, there is a special class where searching for the best solution is based on the collective behavior of decentralized, self-organized agents. This study proposes an approach in which a swarm of agents tries to improve solutions from the population of solutions. The process is carried out in parallel threads. The proposed algorithm—based on the mushroom-picking metaphor—was implemented using Scala in an Apache Spark environment. An extended computational experiment shows how introducing a combination of simple optimization agents and increasing the number of threads may improve the results obtained by the model in the case of TSP and JSSP problems.
      Citation: Algorithms
      PubDate: 2020-06-12
      DOI: 10.3390/a13060142
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 143: Numerically Efficient Fuzzy MPC Algorithm
           with Advanced Generation of Prediction—Application to a Chemical Reactor
           

    • Authors: Piotr M. Marusak
      First page: 143
      Abstract: In Model Predictive Control (MPC) algorithms, control signals are generated after solving optimization problems. If the model used for prediction is linear, then the optimization problem is a standard, easy-to-solve quadratic programming problem with linear constraints. However, such an algorithm may offer insufficient performance if applied to a nonlinear control plant. On the other hand, if the model used for prediction is nonlinear, then a non-convex optimization problem must be solved at each algorithm iteration. Numerical problems may then occur while solving it, and the time needed to calculate the control signals cannot be determined in advance. Therefore, approaches based on linearized models are preferred in practical applications. A fuzzy algorithm with an advanced generation of the prediction is proposed in the article. The prediction is obtained in such a way that the algorithm is formulated as a quadratic optimization problem but offers performance very close to that of the MPC algorithm with nonlinear optimization. The efficiency of the proposed approach is demonstrated in the control system of a nonlinear chemical control plant, a CSTR (Continuous Stirred-Tank Reactor) with the van de Vusse reaction.
      Citation: Algorithms
      PubDate: 2020-06-14
      DOI: 10.3390/a13060143
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 144: Efficient Probabilistic Joint Inversion of
           Direct Current Resistivity and Small-Loop Electromagnetic Data

    • Authors: Christin Bobe, Daan Hanssens, Thomas Hermans, Ellen Van De Vijver
      First page: 144
      Abstract: Often, multiple geophysical measurements are sensitive to the same subsurface parameters. In this case, joint inversions are mostly preferred over two (or more) separate inversions of the geophysical data sets due to the expected reduction of the non-uniqueness in the joint inverse solution. This reduction can be quantified using Bayesian inversions. However, standard Markov chain Monte Carlo (MCMC) approaches are computationally expensive for most geophysical inverse problems. We present the Kalman ensemble generator (KEG) method as an efficient alternative to the standard MCMC inversion approaches. As proof of concept, we provide two synthetic studies of joint inversion of frequency domain electromagnetic (FDEM) and direct current (DC) resistivity data for a parameter model with vertical variation in electrical conductivity. For both studies, joint results show a considerable improvement for the joint framework over the separate inversions. This improvement consists of (1) an uncertainty reduction in the posterior probability density function and (2) an ensemble mean that is closer to the synthetic true electrical conductivities. Finally, we apply the KEG joint inversion to FDEM and DC resistivity field data. Joint field data inversions improve in the same way seen for the synthetic studies.
      Citation: Algorithms
      PubDate: 2020-06-18
      DOI: 10.3390/a13060144
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 145: Binary Time Series Classification with
           Bayesian Convolutional Neural Networks When Monitoring for Marine Gas
           Discharges

    • Authors: Kristian Gundersen, Guttorm Alendal, Anna Oleynik, Nello Blaser
      First page: 145
      Abstract: The world’s oceans are under stress from climate change, acidification and other human activities, and the UN has declared 2021–2030 as the decade for marine science. To monitor the marine waters, with the purpose of detecting discharges of tracers from unknown locations, large areas will need to be covered with limited resources. To increase the detectability of marine gas seepage we propose a deep probabilistic learning algorithm, a Bayesian Convolutional Neural Network (BCNN), to classify time series of measurements. The BCNN will classify time series to belong to a leak/no-leak situation, including classification uncertainty. The latter is important for decision makers who must decide to initiate costly confirmation surveys and, hence, would like to avoid false positives. Results from a transport model are used for the learning process of the BCNN and the task is to distinguish the signal from a leak hidden within the natural variability. We show that the BCNN classifies time series arising from leaks with high accuracy and estimates its associated uncertainty. We combine the output of the BCNN model, the posterior predictive distribution, with a Bayesian decision rule showcasing how the framework can be used in practice to make optimal decisions based on a given cost function.
      Citation: Algorithms
      PubDate: 2020-06-19
      DOI: 10.3390/a13060145
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 146: A Survey on Approximation in Parameterized
           Complexity: Hardness and Algorithms

    • Authors: Andreas Emil Feldmann, Karthik C. S., Euiwoong Lee, Pasin Manurangsi
      First page: 146
      Abstract: Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area both from the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.
      Citation: Algorithms
      PubDate: 2020-06-19
      DOI: 10.3390/a13060146
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 147: Local Comparison between Two Ninth
           Convergence Order Algorithms for Equations

    • Authors: Samundra Regmi, Ioannis K. Argyros, Santhosh George
      First page: 147
      Abstract: A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives up to the tenth order, which do not actually appear in the algorithms, were utilized to show convergence. Moreover, no error estimates, radius of convergence or results on the uniqueness of the solution that can be computed were given. The novelty of our study is that we address all these concerns by using only the first derivative, which does appear in these algorithms; in this way, we extend the applicability of these algorithms. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria and can be used on other algorithms. Numerical experiments are utilized to test the convergence criteria.
      Citation: Algorithms
      PubDate: 2020-06-20
      DOI: 10.3390/a13060147
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 148: An Application of a Modified Gappy Proper
           Orthogonal Decomposition on Complexity Reduction of Allen-Cahn Equation

    • Authors: Chutipong Dechanubeksa, Saifon Chaturantabut
      First page: 148
      Abstract: This work considers model reduction techniques that can substantially decrease the computational cost of simulating the parametrized Allen–Cahn equation. We first employ the proper orthogonal decomposition (POD) approach to reduce the number of unknowns in the full-order discretized system. Since POD cannot reduce the computational complexity of the nonlinearity in the Allen–Cahn equation, we also apply the discrete empirical interpolation method (DEIM) to approximate the nonlinear term for a substantial reduction in overall simulation time. However, in general, the POD-DEIM approach is less accurate than the POD approach, since it further approximates the nonlinear term. To increase the accuracy of the POD-DEIM approach, this work introduces an extension of the DEIM approximation based on the concept of Gappy POD (GPOD), which is optimal in the least-squares sense. The POD-GPOD approach is tested and compared with the POD and POD-DEIM approaches on the Allen–Cahn equation for both fixed and varying parameter values. The modified GPOD approximation introduced in this work is demonstrated to improve the accuracy of DEIM without sacrificing too much of the computational speedup; e.g., in one of our numerical tests, the POD-GPOD approach provides an approximate solution to the parametrized Allen–Cahn equation 200 times faster than the full-order system with an average error of order O(10⁻⁴). The POD-GPOD approach is therefore shown to be a promising technique that compromises between the accuracy of the POD approach and the efficiency of the POD-DEIM approach.
      Citation: Algorithms
      PubDate: 2020-06-22
      DOI: 10.3390/a13060148
      Issue No: Vol. 13, No. 6 (2020)
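
      The POD step common to all three approaches above amounts to an SVD of a snapshot matrix; random snapshots stand in for PDE solutions here, and the DEIM/GPOD treatment of the nonlinearity is not reproduced.

          import numpy as np

          rng = np.random.default_rng(0)
          n, m, r = 400, 60, 8                  # state size, snapshots, modes
          snapshots = rng.normal(size=(n, m))   # stand-in for PDE solutions
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          basis = U[:, :r]                      # POD basis (n x r)
          u = snapshots[:, 0]
          u_reduced = basis.T @ u               # r coefficients instead of n
          u_approx = basis @ u_reduced          # lift back to full space
          energy = (s[:r] ** 2).sum() / (s ** 2).sum()  # captured "energy"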
       
  • Algorithms, Vol. 13, Pages 149: A Distributed Approach to the Evasion
           Problem

    • Authors: Denis Khryashchev, Jie Chu, Mikael Vejdemo-Johansson, Ping Ji
      First page: 149
      Abstract: The Evasion Problem is the question of whether—given a collection of sensors and a particular movement pattern over time—it is possible to stay undetected within the domain over the same stretch of time. It has been studied using topological techniques since 2006—with sufficient conditions for non-existence of an Evasion Path provided by de Silva and Ghrist; sufficient and necessary conditions with extended sensor capabilities provided by Adams and Carlsson; and sufficient and necessary conditions using sheaf theory by Krishnan and Ghrist. In this paper, we propose three algorithms for the Evasion Problem: one distributed algorithm extension of Adams’ approach for evasion path detection, and two different approaches to evasion path enumeration.
      Citation: Algorithms
      PubDate: 2020-06-23
      DOI: 10.3390/a13060149
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 150: Fibers of Failure: Classifying Errors in
           Predictive Processes

    • Authors: Leo S. Carlsson, Mikael Vejdemo-Johansson, Gunnar Carlsson, Pär G. Jönsson
      First page: 150
      Abstract: Predictive models are used in many different fields of science and engineering and are always prone to make faulty predictions. These faulty predictions can be more or less malignant depending on the model application. We describe Fibers of Failure (FiFa), a method to classify failure modes of predictive processes. Our method uses Mapper, an algorithm from topological data analysis (TDA), to build a graphical model of input data stratified by prediction errors. We demonstrate two ways to use the failure mode groupings: either to produce a correction layer that adjusts predictions by similarity to the failure modes, or to inspect members of the failure modes to illustrate and investigate what characterizes each failure mode. We demonstrate FiFa on two scenarios: a convolutional neural network (CNN) predicting MNIST images with added noise, and an artificial neural network (ANN) predicting the electrical energy consumption of an electric arc furnace (EAF). The correction layer on the CNN model improved its prediction accuracy significantly, while the inspection of failure modes for the EAF model provided guiding insights into the domain-specific reasons behind several high-error regions.
      Citation: Algorithms
      PubDate: 2020-06-23
      DOI: 10.3390/a13060150
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 151: Compression of Next-Generation Sequencing
           Data and of DNA Digital Files †

    • Authors: Carpentieri
      First page: 151
      Abstract: The memory and network traffic used and caused by newly sequenced biological data have grown substantially in recent years. Genomic projects such as HapMap and 1000 Genomes have contributed to the very large rise of databases and network traffic related to genomic data and to the development of new efficient technologies. The large-scale sequencing of samples of DNA has brought new attention and produced new research, and thus the interest in the scientific community for genomic data has greatly increased. In a very short time, researchers have developed hardware tools, analysis software, algorithms, private databases, and infrastructures to support the research in genomics. In this paper, we analyze different approaches for compressing digital files generated by Next-Generation Sequencing tools containing nucleotide sequences, and we discuss and evaluate the compression performance of generic compression algorithms by comparing them with Quip, a system designed by Jones et al. specifically for genomic file compression. Moreover, we present a simple but effective technique for the compression of DNA sequences that considers only the relevant DNA data, and we experimentally evaluate its performance.
      Citation: Algorithms
      PubDate: 2020-06-24
      DOI: 10.3390/a13060151
      Issue No: Vol. 13, No. 6 (2020)
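
      A minimal example of exploiting the restricted nucleotide alphabet, in the spirit of considering only the relevant DNA data: packing four bases per byte. A real encoder would also store the sequence length and handle symbols such as N and quality scores, which this sketch does not.

          def pack_dna(seq):
              # 2-bit code for the four nucleotides: A=00, C=01, G=10, T=11.
              # Four bases per byte, i.e. 4x smaller than one char per base.
              code = {"A": 0, "C": 1, "G": 2, "T": 3}
              out = bytearray()
              for k in range(0, len(seq), 4):
                  byte = 0
                  for base in seq[k:k + 4]:
                      byte = (byte << 2) | code[base]
                  out.append(byte)
              return bytes(out)

          packed = pack_dna("ACGTACGTACGT")   # 12 bases -> 3 bytes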
       
  • Algorithms, Vol. 13, Pages 152: DS Evidence Theory-Based Energy Balanced
           Routing Algorithm for Network Lifetime Enhancement in WSN-Assisted IOT

    • Authors: Tang, Lu
      First page: 152
      Abstract: Wireless sensor networks (WSNs) can provide data acquisition for long-term environment monitoring, which are important parts of Internet of Things (IoT). In the WSN-assisted IoT, energy efficient routing algorithms are required to maintain a long network lifetime. In this paper, a DS evidence theory-based energy balanced routing algorithm for network lifetime enhancement (EBRA-NLE) in WSN-assisted IOT is proposed. From the perspective of energy balance and minimization of routing path energy consumption, three attribute indexes are established to evaluate the forward neighboring nodes. Then a route selection method based on DS evidence theory is developed to comprehensively evaluate the nodes and select the optimal next hop. In order to avoid missing the ideal solution because of the excessive difference between the index values, the sine function is used to adjust this difference. The simulation results show that the proposed EBRA-NLE has certain advantages in prolonging network lifetime and balancing energy between nodes.
      Citation: Algorithms
      PubDate: 2020-06-24
      DOI: 10.3390/a13060152
      Issue No: Vol. 13, No. 6 (2020)
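
      The evidence-fusion step that DS (Dempster-Shafer) theory contributes can be sketched with Dempster's rule of combination; the two mass functions over candidate next hops are hypothetical, not the paper's attribute indexes.

          def dempster_combine(m1, m2):
              # Dempster's rule for mass functions keyed by frozenset focal
              # elements; conflicting (empty-intersection) mass is
              # renormalized away (assumes total conflict < 1).
              combined, conflict = {}, 0.0
              for a, w1 in m1.items():
                  for b, w2 in m2.items():
                      inter = a & b
                      if inter:
                          combined[inter] = combined.get(inter, 0.0) + w1 * w2
                      else:
                          conflict += w1 * w2
              return {k: v / (1.0 - conflict) for k, v in combined.items()}

          # Two evidence sources scoring candidate next-hop nodes.
          m_energy = {frozenset({"n1"}): 0.6, frozenset({"n1", "n2"}): 0.4}
          m_dist = {frozenset({"n1"}): 0.5, frozenset({"n2"}): 0.3,
                    frozenset({"n1", "n2"}): 0.2}
          print(dempster_combine(m_energy, m_dist))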
       
  • Algorithms, Vol. 13, Pages 153: Novel Graph Model for Solving
           Collision-Free Multiple-Vehicle Traveling Salesman Problem Using Ant
           Colony Optimization

    • Authors: Pamosoaji, Setyohadi
      First page: 153
      Abstract: In this paper, a novel graph model for the Collision-Free Multiple-Vehicle Traveling Salesman Problem (CFMTSP) is proposed. In this problem, a group of vehicles start from different nodes in an undirected graph and must visit each node in the graph, following the well-known Traveling Salesman Problem (TSP) fashion, without any collision. This paper's main objective is to obtain collision-free routes for each vehicle while minimizing the traveling time of the slowest vehicle. The problem can be approached by assigning a speed to each vehicle, and to this end a novel augmented graph model is proposed that accommodates not only node positions and inter-node distances but also the speeds of all the vehicles. The proposed augmented graph should be able to be used to obtain optimal trajectories, i.e., routes and speeds, for all vehicles. An ant colony optimization (ACO) algorithm is used on the proposed augmented graph. Simulations show that the algorithm can satisfy the main objective. Factors such as the conditions for mission success, i.e., the inter-vehicle arrival time at a node, the number of vehicles, and the numbers of nodes and edges of the graph, are also discussed.
      Citation: Algorithms
      PubDate: 2020-06-26
      DOI: 10.3390/a13060153
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 154: An Algorithm for Fuzzy Negations
           Based-Intuitionistic Fuzzy Copula Aggregation Operators in Multiple
           Attribute Decision Making

    • Authors: Giakoumakis, Papadopoulos
      First page: 154
      Abstract: In this paper, we develop a novel computation model of intuitionistic fuzzy values using fuzzy negations and Archimedean copulas. The structure of this novel computation model is based on extending the existing operations of intuitionistic fuzzy values with some classes of fuzzy negations. Many properties of the proposed operations are investigated and proved. Additionally, we introduce the concepts of intuitionistic fuzzy Archimedean copula weighted arithmetic and geometric aggregation operators based on fuzzy negations, including a further analysis of their properties. Finally, using a case study from an already published paper, we found that our method has many advantages.
      Citation: Algorithms
      PubDate: 2020-06-26
      DOI: 10.3390/a13060154
      Issue No: Vol. 13, No. 6 (2020)
       
  • Algorithms, Vol. 13, Pages 106: Diagnosis in Tennis Serving Technique

    • Authors: Eugenio Roanes-Lozano, Eduardo A. Casella, Fernando Sánchez, Antonio Hernando
      First page: 106
      Abstract: Tennis is a sport with a very complex technique. Amateur tennis players have trainers and/or coaches, but are not usually accompanied by them to championships. Curiously, in this sport, the result of many matches can be changed by a small hint like 'hit the ball a little higher when serving'. However, the biomechanics of a tennis stroke are only clear to an expert. We, therefore, developed a prototype of a rule-based expert system (RBES) aimed at an amateur competition player who is not accompanied by his/her coach at a championship and is not serving as usual (the RBES is so far restricted to serving). The player has to answer a set of questions about how he/she is serving that day and his/her usual serving technique, and the RBES obtains a diagnosis using logic inference about the possible reasons (according to the logic rules that have been previously given to the RBES). A certain knowledge of tennis terminology and technique is required from the player, but that is something known at this level. The underlying logic is Boolean and the inference engine is algebraic (it uses Groebner bases).
      Citation: Algorithms
      PubDate: 2020-04-25
      DOI: 10.3390/a13050106
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 107: How to Inspect and Measure Data Quality
           about Scientific Publications: Use Case of Wikipedia and CRIS Databases

    • Authors: Otmane Azeroual, Włodzimierz Lewoniewski
      First page: 107
      Abstract: The quality assurance of publication data in collaborative knowledge bases and in current research information systems (CRIS) becomes more and more relevant through the use of freely available spatial information in different application scenarios. When integrating this data into a CRIS, it is necessary to be able to recognize and assess its quality. Only then is it possible to compile a result from the available data that fulfills its purpose for the user, namely to deliver reliable data and information. This paper discusses the quality problems of source metadata in Wikipedia and CRIS. Based on real data from over 40 million Wikipedia articles in various languages, we performed a preliminary quality analysis of the metadata of scientific publications using a data quality tool. So far, no data quality measurements have been programmed with Python to assess the quality of metadata from scientific publications in Wikipedia and CRIS. With this in mind, we programmed the methods and algorithms as code, but present them in this paper in the form of pseudocode, to measure quality along objective data quality dimensions such as completeness, correctness, consistency, and timeliness. This was prepared as a macro service so that users can apply the measurement results, together with the program code, to assess their scientific publication metadata, and so that management can rely on high-quality data when making decisions.
      Citation: Algorithms
      PubDate: 2020-04-26
      DOI: 10.3390/a13050107
      Issue No: Vol. 13, No. 5 (2020)
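      Code sketch: A minimal Python illustration of one of the dimensions discussed (completeness, as the share of required metadata fields that are non-empty); the field names are hypothetical and the metric is a textbook definition, not the authors' exact implementation.

          REQUIRED_FIELDS = ["title", "authors", "year", "doi"]  # hypothetical schema

          def completeness(record, required=REQUIRED_FIELDS):
              """Fraction of required fields that are present and non-empty."""
              filled = sum(1 for f in required
                           if record.get(f) not in (None, "", []))
              return filled / len(required)

          record = {"title": "An Example", "authors": ["A. Author"],
                    "year": 2020, "doi": ""}
          print(completeness(record))  # 0.75 -> the DOI is missing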
       
  • Algorithms, Vol. 13, Pages 108: Investigation of the iCC Framework
           Performance for Solving Constrained LSGO Problems

    • Authors: Alexey Vakhnin, Evgenii Sopov
      First page: 108
      Abstract: Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO. At the same time, cLSGO problems are not yet well studied. The majority of modern optimization techniques demonstrate insufficient performance when confronted with cLSGO tasks. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been proven in many scientific papers and studies. Moreover, the cooperative coevolution (CC) framework has been successfully applied to EAs for solving LSGO problems. In this paper, a new approach for solving cLSGO tasks is proposed. This approach is based on CC and on a method that increases the size of the groups of variables at the decomposition stage (iCC) when solving cLSGO tasks. A new algorithm is proposed that combines the success-history based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely, ε-iCC-SHADE). We investigated the performance of ε-iCC-SHADE and compared it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 Competition on constrained real-parameter optimization.
      Citation: Algorithms
      PubDate: 2020-04-26
      DOI: 10.3390/a13050108
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 109: Evolution of SOMs’ Structure and
           Learning Algorithm: From Visualization of High-Dimensional Data to
           Clustering of Complex Data

    • Authors: Marian B. Gorzałczany, Filip Rudziński
      First page: 109
      Abstract: In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, we discuss Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs). They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of the particular solutions is illustrated and compared on selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks by providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of the particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated. A minimal sketch of the conventional SOM update step that these methods generalize follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-28
      DOI: 10.3390/a13050109
      Issue No: Vol. 13, No. 5 (2020)
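      Code sketch: A minimal NumPy implementation of the conventional SOM learning step that all the discussed generalizations build on (winner search plus neighborhood-weighted update); grid size, rates, and radii are arbitrary illustrative choices.

          import numpy as np

          rng = np.random.default_rng(0)
          grid_h, grid_w, dim = 10, 10, 3
          weights = rng.random((grid_h, grid_w, dim))      # neuron codebook vectors
          coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                        indexing="ij"), axis=-1)

          def som_step(x, weights, lr=0.1, radius=2.0):
              """One SOM update: find best-matching unit, pull neighbors toward x."""
              dists = np.linalg.norm(weights - x, axis=-1)
              bmu = np.unravel_index(np.argmin(dists), dists.shape)
              grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
              h = np.exp(-grid_d2 / (2.0 * radius ** 2))   # Gaussian neighborhood
              weights += lr * h[..., None] * (x - weights)

          for x in rng.random((1000, dim)):                # toy training data
              som_step(x, weights)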
       
  • Algorithms, Vol. 13, Pages 110: p-Refined Multilevel Quasi-Monte Carlo for
           Galerkin Finite Element Methods with Applications in Civil Engineering

    • Authors: Philippe Blondeel, Pieterjan Robbe, Cédric Van hoorickx, Stijn François, Geert Lombaert, Stefan Vandewalle
      First page: 110
      Abstract: Civil engineering applications are often characterized by a large uncertainty on the material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can be expressed as a random field represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic responses, i.e., the expected value and variance of a chosen quantity of interest, remains very costly, even when state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using a recently developed multilevel method: p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). This method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. It is combined with a rank-1 Quasi-Monte Carlo (QMC) lattice rule, which yields faster convergence compared to the use of random Monte Carlo points. In this work, we developed algorithms for the p-MLQMC method for two-dimensional problems. The p-MLQMC method is first benchmarked on an academic beam problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering and typically suffers from large parameter uncertainty. For both considered problems, we observe a very significant reduction in the amount of computational work with respect to MLMC. A minimal sketch of the underlying multilevel telescoping estimator follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-28
      DOI: 10.3390/a13050110
      Issue No: Vol. 13, No. 5 (2020)
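      Code sketch: The multilevel idea common to MLMC and p-MLQMC is the telescoping sum E[Q_L] = E[Q_0] + sum over levels of E[Q_l - Q_(l-1)]; the sketch below estimates it with plain Monte Carlo samples of a toy level-dependent quantity, standing in for the finite element solves and the rank-1 lattice points of the paper.

          import numpy as np

          rng = np.random.default_rng(0)

          def qoi(z, level):
              """Toy level-dependent quantity of interest: the level bias
              2^-level stands in for the discretization error of a solve."""
              return z ** 2 + 2.0 ** (-level)

          def ml_estimate(n_per_level=(4000, 2000, 1000, 500)):
              z0 = rng.standard_normal(n_per_level[0])
              est = np.mean(qoi(z0, 0))                        # coarsest level
              for level in range(1, len(n_per_level)):
                  z = rng.standard_normal(n_per_level[level])  # same sample on
                  est += np.mean(qoi(z, level) - qoi(z, level - 1))  # both levels
              return est

          print(ml_estimate())   # approaches E[Z^2] + 2^-3 = 1.125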
       
  • Algorithms, Vol. 13, Pages 111: Multi-Level Joint Feature Learning for
           Person Re-Identification

    • Authors: Shaojun Wu, Ling Gao
      First page: 111
      Abstract: In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods extract only global features or local features of pedestrian images. Some inconspicuous details are easily ignored when learning image features, which is neither efficient nor robust in scenarios with large differences. In this paper, we propose a Multi-level Feature Fusion model that combines both global features and local features of images through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with the Part-based Multi-level Net to fuse low-to-high level local features of pedestrian images. Global-Local Branches are used to extract the local features and global features at the highest level. The experiments prove that our deep learning model based on multi-level feature fusion works well in person re-identification. The overall results outperform the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming existing works by a large margin (more than 6%).
      Citation: Algorithms
      PubDate: 2020-04-29
      DOI: 10.3390/a13050111
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 112: Two NEH Heuristic Improvements for
           Flowshop Scheduling Problem with Makespan Criterion

    • Authors: Christophe Sauvey, Nathalie Sauer
      First page: 112
      Abstract: Since its creation by Nawaz, Enscore, and Ham in 1983, NEH remains the best heuristic method to solve flowshop scheduling problems. In the large body of literature dealing with applications of this heuristic, it can clearly be noted that results differ from one paper to another. In this paper, two methods are proposed to improve the original NEH, based on the two points in the method where choices must be made in case of equivalence between two job orders or partial sequences. When an equality occurs in a sorting method, two results are equivalent but can lead to different final results. In order to propose the first improvement to NEH, the factorial basis decomposition method is introduced, which makes a number computationally correspond to a permutation (a minimal sketch follows this entry). This method is very helpful for the first improvement, and allows testing of all the sequencing possibilities for problems counting up to 50 jobs. The second improvement is located where NEH keeps the best partial sequence. Similarly, a list of equivalent partial sequences is kept, rather than only one, to give the overall method a chance of better performance. The results obtained with the successive use of the two methods of improvement present an average improvement of 19% over the already effective results of the original NEH method.
      Citation: Algorithms
      PubDate: 2020-04-29
      DOI: 10.3390/a13050112
      Issue No: Vol. 13, No. 5 (2020)
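      Code sketch: A minimal Python version of the number-to-permutation correspondence on which the described factorial basis decomposition relies (the factorial number system, also known as the Lehmer code); the function name is ours.

          from math import factorial

          def index_to_permutation(index, items):
              """Decode an integer in [0, n!) into the corresponding permutation
              of `items` via the factorial number system (Lehmer code)."""
              items = list(items)
              perm = []
              for pos in range(len(items) - 1, -1, -1):
                  digit, index = divmod(index, factorial(pos))
                  perm.append(items.pop(digit))
              return perm

          # Enumerate all 3! = 6 job orders of three jobs:
          for i in range(6):
              print(i, index_to_permutation(i, ["J1", "J2", "J3"]))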
       
  • Algorithms, Vol. 13, Pages 113: Goal Oriented Time Adaptivity Using Local
           Error Estimates

    • Authors: Peter Meisrimel, Philipp Birken
      First page: 113
      Abstract: We consider initial value problems (IVPs) where we are interested in a quantity of interest (QoI) that is the integral in time of a functional of the solution. For these, we analyze goal-oriented time-adaptive methods that use only local error estimates. A local error estimate and a timestep controller for step-wise contributions to the QoI are derived. We prove convergence of the error in the QoI as the tolerance goes to zero, under a controllability assumption. By analyzing global error propagation with respect to the QoI, we can identify possible issues and make performance predictions. Numerical tests verify these results. We compare performance with classical local error based time adaptivity and with a posteriori based adaptivity using the dual-weighted residual (DWR) method. For dissipative problems, local error based methods show better performance than DWR, and the goal-oriented method shows good results in most examples, with significant speedups in some cases. A minimal sketch of a local error based step-size controller follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-30
      DOI: 10.3390/a13050113
      Issue No: Vol. 13, No. 5 (2020)
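      Code sketch: The standard elementary step-size controller used in local error based time adaptivity; the update dt_new = dt * (tol/est)^(1/order) with a safety factor is textbook material and stands in for the QoI-specific controller derived in the paper.

          def controller_step(dt, err_est, tol, order, safety=0.9,
                              grow_max=2.0, shrink_min=0.1):
              """Propose the next step size from a local error estimate:
              classical deadbeat controller dt * (tol/err)^(1/order), clipped."""
              if err_est == 0.0:
                  return dt * grow_max
              factor = safety * (tol / err_est) ** (1.0 / order)
              return dt * min(grow_max, max(shrink_min, factor))

          print(controller_step(dt=0.01, err_est=2e-5, tol=1e-5, order=2))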
       
  • Algorithms, Vol. 13, Pages 114: Automobile Fine-Grained Detection
           Algorithm Based on Multi-Improved YOLOv3 in Smart Streetlights

    • Authors: Yang, Yang, He, Fu, Jiang
      First page: 114
      Abstract: Upgrading ordinary streetlights to smart streetlights to help monitor traffic flow is a low-cost and pragmatic option for cities. Fine-grained classification of vehicles in the sight of smart streetlights is essential for intelligent transportation and smart cities. In order to improve the classification accuracy of distant cars, we propose a reformed YOLOv3 (You Only Look Once, version 3) algorithm to realize the detection of various types of automobiles, such as SUVs, sedans, taxis, commercial vehicles, small commercial vehicles, vans, buses, trucks and pickup trucks. Based on the UA-DETRAC-LITE dataset, manually labeled data are added to improve the data balance. First, data optimization for the vehicle target is performed to improve the generalization ability and the position regression loss function of the model. The experimental results show that, within a range of 67 m and through scale optimization (i.e., by introducing multi-scale training and anchor clustering), the classification accuracies for trucks and pickup trucks are raised by 26.98% and 16.54%, respectively, and the overall accuracy is increased by 8%. Second, label smoothing and mixup optimization are also performed to improve the generalization ability of the model. Compared with the original YOLO algorithm, the accuracy of the proposed algorithm is improved by 16.01%. By further combining the GIoU (Generalized Intersection over Union) position regression loss, the overall system accuracy reaches 92.7%, which improves the performance by 21.28% compared with the original YOLOv3 algorithm. A minimal GIoU computation follows this entry.
      Citation: Algorithms
      PubDate: 2020-05-02
      DOI: 10.3390/a13050114
      Issue No: Vol. 13, No. 5 (2020)
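      Code sketch: GIoU for two axis-aligned boxes, as used in the loss mentioned above; boxes are (x1, y1, x2, y2), and this is the published GIoU definition, not the authors' full loss.

          def giou(a, b):
              """Generalized IoU of two boxes given as (x1, y1, x2, y2)."""
              ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
              ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
              inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
              area_a = (a[2] - a[0]) * (a[3] - a[1])
              area_b = (b[2] - b[0]) * (b[3] - b[1])
              union = area_a + area_b - inter
              # Smallest enclosing box C:
              cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
              cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
              area_c = (cx2 - cx1) * (cy2 - cy1)
              return inter / union - (area_c - union) / area_c

          print(giou((0, 0, 2, 2), (1, 1, 3, 3)))  # IoU = 1/7, GIoU < IoU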
       
  • Algorithms, Vol. 13, Pages 115: A Fuzzy-Based Decision Support Model for
           Risk Maturity Evaluation of Construction Organizations

    • Authors: Roghabadi, Moselhi
      First page: 115
      Abstract: Risk maturity evaluation is an efficient tool which can assist construction organizations in identifying their strengths and weaknesses in risk management processes and in taking the necessary actions to improve these processes. The accuracy of its results relies heavily on the quality of the responses provided by participants specialized in these processes across the organization. Risk maturity models reported in the literature gave equal importance to participants' responses during model development, neglecting their level of authority in the organization as well as their level of expertise in risk management processes. Unlike the existing models, this paper presents a new risk maturity model that considers the relative importance of the responses provided by the participants in the model development. It considers their authority in the organization and their level of involvement in the risk management processes when calculating the relative weights associated with the risk maturity attributes. It employs an analytic network process (ANP) to model the interdependencies among the risk maturity attributes and utilizes fuzzy set theory to incorporate the uncertainty associated with the ambiguity of the responses used in the model development. The developed model allows construction organizations to have a more accurate and realistic view of their current performance in risk management processes. The application of the developed model was investigated by measuring the risk maturity level of an industrial partner working on civil infrastructure projects in Canada.
      Citation: Algorithms
      PubDate: 2020-05-02
      DOI: 10.3390/a13050115
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 116: The Expected Utility Insurance Premium
            Principle with Fourth-Order Statistics: Does It Make a Difference?

    • Authors: Alessandro Mazzoccoli, Maurizio Naldi
      First page: 116
      Abstract: The expected utility principle is often used to compute the insurance premium through a second-order approximation of the expected value of the utility of losses. We investigate the impact of using a more accurate approximation based on the fourth-order statistics of the expected loss and derive the premium under this expectedly more accurate approximation. The comparison between the two approximation levels shows that the second-order-based premium is always lower (i.e., an underestimate of the correct one) for the commonest loss distributions encountered in insurance. The comparison is also carried out for real cases, considering the loss parameter values estimated in the literature. The increased risk of the insurer is assessed through the Value-at-Risk. A minimal numerical comparison of the two approximation orders follows this entry.
      Citation: Algorithms
      PubDate: 2020-05-06
      DOI: 10.3390/a13050116
      Issue No: Vol. 13, No. 5 (2020)
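      Code sketch: A toy numerical check of the effect discussed above, under the assumption of exponential utility, for which the exact premium equals the scaled cumulant generating function (1/a)·log E[exp(aX)] and the second- and fourth-order approximations truncate its cumulant series; the gamma loss distribution and all parameter values are our own illustrative choices, not the paper's case studies.

          import numpy as np

          rng = np.random.default_rng(0)
          a = 0.25                                 # exponential risk-aversion
          x = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)  # toy losses

          exact = np.log(np.mean(np.exp(a * x))) / a   # premium = CGF(a)/a

          # Sample cumulants feed the truncated expansions:
          k1 = x.mean()
          k2 = x.var()
          k3 = ((x - k1) ** 3).mean()
          k4 = ((x - k1) ** 4).mean() - 3.0 * k2 ** 2

          second_order = k1 + a * k2 / 2.0
          fourth_order = second_order + a ** 2 * k3 / 6.0 + a ** 3 * k4 / 24.0

          print(exact, second_order, fourth_order)
          # For right-skewed losses the second-order premium underestimates
          # the exact one, and the fourth-order terms close most of the gap.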
       
  • Algorithms, Vol. 13, Pages 117: A Novel Hybrid Metaheuristic Algorithm for
           Optimization of Construction Management Site Layout Planning

    • Authors: Prayogo, Cheng, Wu, Redi, Yu, Persada, Nadlifatin
      First page: 117
      Abstract: Symbiotic organisms search (SOS) is a promising metaheuristic algorithm that has been studied recently by numerous researchers due to its capability to solve various hard and complex optimization problems. SOS is a powerful optimization technique that mimics the typical symbiotic interactions among organisms in an ecosystem. This study presents a new SOS-based hybrid algorithm for solving challenging discrete construction site layout planning (CSLP) problems. The new algorithm, called hybrid symbiotic organisms search with local operators (HSOS-LO), combines the canonical SOS with several local search mechanisms aimed at increasing the search capability in discrete solution spaces. In this study, three CSLP problems consisting of single- and multi-floor facility layout problems are tested, and the obtained results are compared with those of other widely used metaheuristic algorithms. The results indicate the robust performance of the HSOS-LO algorithm in handling discrete CSLP problems. A minimal sketch of the SOS mutualism step follows this entry.
      Citation: Algorithms
      PubDate: 2020-05-06
      DOI: 10.3390/a13050117
      Issue No: Vol. 13, No. 5 (2020)
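      Code sketch: The mutualism phase of the canonical (continuous) SOS algorithm, on which the described hybrid builds; this follows the standard formulation and omits the paper's discrete decoder and local search operators.

          import numpy as np

          rng = np.random.default_rng(0)

          def mutualism(pop, fitness, best, objective):
              """Canonical SOS mutualism phase: each organism i interacts with a
              random partner j; both move toward the best via the mutual vector."""
              n, dim = pop.shape
              for i in range(n):
                  j = rng.choice([k for k in range(n) if k != i])
                  mutual = (pop[i] + pop[j]) / 2.0
                  bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)  # benefit factors
                  new_i = pop[i] + rng.random(dim) * (best - mutual * bf1)
                  new_j = pop[j] + rng.random(dim) * (best - mutual * bf2)
                  for k, new in ((i, new_i), (j, new_j)):            # greedy accept
                      f = objective(new)
                      if f < fitness[k]:
                          pop[k], fitness[k] = new, f
              return pop, fitness

          sphere = lambda x: float(np.sum(x ** 2))
          pop = rng.uniform(-5, 5, (20, 4))
          fit = np.array([sphere(x) for x in pop])
          for _ in range(100):
              best = pop[np.argmin(fit)].copy()
              pop, fit = mutualism(pop, fit, best, sphere)
          print(fit.min())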
       
  • Algorithms, Vol. 13, Pages 118: Distributional Reinforcement Learning with
           Ensembles

    • Authors: Björn Lindenberg, Jonas Nordqvist, Karl-Olof Lindahl
      First page: 118
      Abstract: It is well known that ensemble methods often provide enhanced performance in reinforcement learning. In this paper, we explore this concept further by using group-aided training within the distributional reinforcement learning paradigm. Specifically, we propose an extension to categorical reinforcement learning, where distributional learning targets are implicitly based on the total information gathered by an ensemble. We empirically show that this may lead to much more robust initial learning, a stronger individual performance level, and good efficiency on a per-sample basis.
      Citation: Algorithms
      PubDate: 2020-05-07
      DOI: 10.3390/a13050118
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 119: Forecasting Electricity Prices: A Machine
            Learning Approach

    • Authors: Castelli, Groznik, Popovič
      First page: 119
      Abstract: The electricity market is a complex, evolutionary, and dynamic environment. Forecasting electricity prices is an important issue for all electricity market participants. In this study, we shed light on how to improve electricity price forecasting accuracy through the use of a machine learning technique—namely, a novel genetic programming approach. Drawing on empirical data from the largest EU energy markets, we propose a forecasting model that considers variables related to weather conditions, oil prices, and CO2 coupons and predicts energy prices 24 hours ahead. We show that the proposed model provides more accurate predictions of future electricity prices than existing prediction methods. These findings will assist electricity market participants in forecasting future price movements.
      Citation: Algorithms
      PubDate: 2020-05-08
      DOI: 10.3390/a13050119
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 120: A Novel Data-Driven Magnetic Resonance
           Spectroscopy Signal Analysis Framework to Quantify Metabolite
           Concentration

    • Authors: Omid Bazgir, Eric Walden, Brian Nutter, Sunanda Mitra
      First page: 120
      Abstract: Developing tools for precise quantification of brain metabolites using magnetic resonance spectroscopy (MRS) is an active area of research with broad application in non-invasive neurodegenerative disease studies. The tools are mainly developed based on black-box (data-driven) or basis-set approaches. In this study, we offer a multi-stage framework that integrates data-driven and basis-set methods. We first use truncated Hankel singular value decomposition (HSVD) to decompose free induction decay (FID) signals into single-tone FIDs, as the data-driven stage. Subsequently, the single-tone FIDs are clustered into basis sets using K-means initialized with prior knowledge of the metabolites, as the basis-set stage. The generated basis sets are fitted to the magnetic resonance (MR) spectra using linearly constrained least squares, and the metabolite concentrations are then calculated. Prior to applying our proposed multi-stage approach, a sequence of preprocessing blocks, water peak removal, phase correction, and baseline correction (developed in house), is used. A minimal HSVD decomposition sketch follows this entry.
      Citation: Algorithms
      PubDate: 2020-05-10
      DOI: 10.3390/a13050120
      Issue No: Vol. 13, No. 5 (2020)
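      Code sketch: The truncated Hankel SVD step on a synthetic FID; scipy's hankel builds the matrix and a rank cut-off keeps the dominant components. The signal parameters are invented, and the full HSVD parameter estimation (eigendecomposition of the shifted signal subspace) is omitted.

          import numpy as np
          from scipy.linalg import hankel

          rng = np.random.default_rng(0)

          # Synthetic FID: two damped complex exponentials plus noise.
          t = np.arange(256) * 1e-3
          fid = (np.exp((-30 + 2j * np.pi * 150) * t)
                 + 0.5 * np.exp((-20 + 2j * np.pi * 420) * t))
          fid += 0.05 * (rng.standard_normal(t.size)
                         + 1j * rng.standard_normal(t.size))

          # Hankel matrix of the FID and its truncated SVD:
          H = hankel(fid[:128], fid[127:])
          U, s, Vh = np.linalg.svd(H, full_matrices=False)
          rank = 2                                 # number of single-tone FIDs
          H_denoised = (U[:, :rank] * s[:rank]) @ Vh[:rank]

          print(s[:4])   # two dominant singular values stand out from noise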
       
  • Algorithms, Vol. 13, Pages 121: Ensemble Deep Learning Models for
           Forecasting Cryptocurrency Time-Series

    • Authors: Ioannis E. Livieris, Emmanuel Pintelas, Stavros Stavroyiannis, Panagiotis Pintelas
      First page: 121
      Abstract: Nowadays, cryptocurrency has infiltrated almost all financial transactions; thus, it is generally recognized as an alternative method for paying and exchanging currency. Cryptocurrency trade constitutes a constantly increasing financial market and a promising type of profitable investment; however, it is characterized by high volatility and strong fluctuations of prices over time. Therefore, the development of an intelligent forecasting model is considered essential for portfolio optimization and decision making. The main contribution of this research is the combination of three of the most widely employed ensemble learning strategies, ensemble-averaging, bagging and stacking, with advanced deep learning models for forecasting major cryptocurrency hourly prices. The proposed ensemble models were evaluated utilizing state-of-the-art deep learning models as component learners, which comprised combinations of long short-term memory (LSTM), bi-directional LSTM and convolutional layers. The ensemble models were evaluated on predicting the cryptocurrency price in the following hour (regression) and on predicting whether the price in the following hour will increase or decrease with respect to the current price (classification). Additionally, the reliability of each forecasting model and the efficiency of its predictions are evaluated by examining the autocorrelation of the errors. Our detailed experimental analysis indicates that ensemble learning and deep learning can be efficiently beneficial to each other, for developing strong, stable, and reliable forecasting models.
      Citation: Algorithms
      PubDate: 2020-05-10
      DOI: 10.3390/a13050121
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 122: Incremental FPT Delay

    • Authors: Arne Meier
      First page: 122
      Abstract: In this paper, we study the relationship of parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPTa and CapIncFPTa) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of TFNP, the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, known from Megiddo and Papadimitriou (TCS 1991). We show that the collapse of this class TF(para-NP), the restriction of the function variant of NP to total functions, to F(FPT), the function variant of FPT, is equivalent to OutputFPT coinciding with IncFPT. In addition, these collapses are shown to be equivalent to TFNP = FP, and also to P being equal to NP intersected with coNP. Finally, we show that these two collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections of collapses in parameterized enumeration complexity to collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory.
      Citation: Algorithms
      PubDate: 2020-05-15
      DOI: 10.3390/a13050122
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 123: Mining Sequential Patterns with
           VC-Dimension and Rademacher Complexity

    • Authors: Diego Santoro, Andrea Tonon, Fabio Vandin
      First page: 123
      Abstract: Sequential pattern mining is a fundamental data mining task with applications in several domains. We study two variants of this task—the first is the extraction of frequent sequential patterns, whose frequency in a dataset of sequential transactions is higher than a user-provided threshold; the second is the mining of true frequent sequential patterns, which appear with probability above a user-defined threshold in transactions drawn from the generative process underlying the data. We present the first sampling-based algorithm to mine, with high confidence, a rigorous approximation of the frequent sequential patterns from massive datasets. We also present the first algorithms to mine approximations of the true frequent sequential patterns with rigorous guarantees on the quality of the output. Our algorithms are based on novel applications of Vapnik–Chervonenkis dimension and Rademacher complexity, advanced tools from statistical learning theory, to sequential pattern mining. Our extensive experimental evaluation shows that our algorithms provide high-quality approximations for both problems we consider.
      Citation: Algorithms
      PubDate: 2020-05-18
      DOI: 10.3390/a13050123
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 124: A Novel Method for Inference of Chemical
           Compounds of Cycle Index Two with Desired Properties Based on Artificial
           Neural Networks and Integer Programming

    • Authors: Jianshen Zhu, Chenxi Wang, Aleksandar Shurbevski, Hiroshi Nagamochi, Tatsuya Akutsu
      First page: 124
      Abstract: Inference of chemical compounds with desired properties is important for drug design, chemo-informatics, and bioinformatics, to which various algorithmic and machine learning techniques have been applied. Recently, a novel method has been proposed for this inference problem using both artificial neural networks (ANN) and mixed integer linear programming (MILP). This method consists of the training phase and the inverse prediction phase. In the training phase, an ANN is trained so that the output of the ANN takes a value nearly equal to a given chemical property for each sample. In the inverse prediction phase, a chemical structure is inferred using MILP and enumeration so that the structure can have a desired output value for the trained ANN. However, the framework has been applied only to the case of acyclic and monocyclic chemical compounds so far. In this paper, we significantly extend the framework and present a new method for the inference problem for rank-2 chemical compounds (chemical graphs with cycle index 2). The results of computational experiments using such chemical properties as octanol/water partition coefficient, melting point, and boiling point suggest that the proposed method is much more useful than the previous method.
      Citation: Algorithms
      PubDate: 2020-05-18
      DOI: 10.3390/a13050124
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 125: Moving Deep Learning to the Edge

    • Authors: Mário P. Véstias, Rui Policarpo Duarte, José T. de Sousa, Horácio C. Neto
      First page: 125
      Abstract: Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications where low latency is required. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devices themselves, in order to alleviate cloud server workloads and improve latency. However, edge devices are less powerful than cloud servers, and many are subject to energy constraints. Hence, new resource- and energy-oriented deep learning models are required, as well as new computing platforms. This paper reviews the main research directions for edge computing deep learning algorithms.
      Citation: Algorithms
      PubDate: 2020-05-18
      DOI: 10.3390/a13050125
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 126: PUB-SalNet: A Pre-Trained Unsupervised
           Self-Aware Backpropagation Network for Biomedical Salient Segmentation

    • Authors: Feiyang Chen, Ying Jiang, Xiangrui Zeng, Jing Zhang, Xin Gao, Min Xu
      First page: 126
      Abstract: Salient segmentation is a critical step in biomedical image analysis, aiming to cut out the regions that are most interesting to humans. Recently, supervised methods have achieved promising results in biomedical areas, but they depend on annotated training data sets, which require labor and proficiency in related background knowledge. In contrast, unsupervised learning makes data-driven decisions by obtaining insights directly from the data themselves. In this paper, we propose a completely unsupervised self-aware network based on pre-training and attentional backpropagation for biomedical salient segmentation, named PUB-SalNet. Firstly, we aggregate a new biomedical data set from several simulated Cellular Electron Cryo-Tomography (CECT) data sets featuring rich salient objects, different SNR settings, and various resolutions, which is called SalSeg-CECT. Based on the SalSeg-CECT data set, we then pre-train a model specially designed for biomedical tasks as a backbone module to initialize the network parameters. Next, we present a U-SalNet network to learn to selectively attend to salient objects. It includes two types of attention modules to facilitate learning saliency through global contrast and local similarity. Lastly, we jointly refine the salient regions together with feature representations from U-SalNet, with the parameters updated by self-aware attentional backpropagation. We apply PUB-SalNet to the analysis of 2D simulated and real images and achieve state-of-the-art performance on simulated biomedical data sets. Furthermore, our proposed PUB-SalNet can be easily extended to 3D images. The experimental results on the 2D and 3D data sets also demonstrate the generalization ability and robustness of our method.
      Citation: Algorithms
      PubDate: 2020-05-19
      DOI: 10.3390/a13050126
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 127: The Effect of Different Deep Network
           Architectures upon CNN-Based Gaze Tracking

    • Authors: Hui-Hui Chen, Bor-Jiunn Hwang, Jung-Shyr Wu, Po-Ting Liu
      First page: 127
      Abstract: In this paper, we explore the effect of using different convolutional layers, batch normalization and a global average pooling layer upon a convolutional neural network (CNN) based gaze tracking system. A novel method is proposed to label the participant's face images with gaze points retrieved from an eye tracker while watching videos, for building a training dataset that is closer to human visual behavior. The participants can swing their heads freely; therefore, the most real and natural images can be obtained without too many restrictions. The labeled data are classified according to the coordinates of the gaze and the area of interest on the screen. Therefore, varied network architectures are applied to estimate and compare the effects, including the number of convolutional layers, batch normalization (BN) and the use of a global average pooling (GAP) layer instead of a fully connected layer. Three schemes, the single-eye image, the double-eye image and the facial image, with data augmentation, are used to feed the neural network for training and evaluation. The input image of the eye or face for an eye tracking system is mostly a small-sized image with relatively few features. The results show that BN and GAP are helpful in overcoming the problem of training the models and in reducing the number of network parameters. The accuracy is significantly improved when GAP and BN are used at the same time. Overall, the face scheme has the highest accuracy, 0.883, when BN and GAP are used together. Additionally, compared with a fully connected layer of size 512, the number of parameters is reduced by less than 50% and the accuracy is improved by about 2%. A detection accuracy comparison of our model with the existing George and Routray methods shows that our proposed method achieves a prediction accuracy better by more than 6%.
      Citation: Algorithms
      PubDate: 2020-05-19
      DOI: 10.3390/a13050127
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 128: Change-Point Detection in Autoregressive
           Processes via the Cross-Entropy Method

    • Authors: Lijing Ma, Georgy Sofronov
      First page: 128
      Abstract: It is very often the case that at some moment a time series process abruptly changes its underlying structure; therefore, it is very important to accurately detect such change-points. In this problem, which is called a change-point (or break-point) detection problem, we need to find a method that divides the original nonstationary time series into piecewise stationary segments. In this paper, we develop a flexible method to estimate the unknown number and the locations of change-points in autoregressive time series. In order to find the optimal value of a performance function based on the Minimum Description Length principle, we develop a Cross-Entropy algorithm for the combinatorial optimization problem. Our numerical experiments show that the proposed approach is very efficient in detecting multiple change-points when the underlying process has moderate to substantial variations in the mean and the autocorrelation coefficient. We also apply the proposed method to real data of the daily AUD/CNY exchange rate series from 2 January 2018 to 24 March 2020. A minimal cross-entropy optimization sketch follows this entry.
      Citation: Algorithms
      PubDate: 2020-05-20
      DOI: 10.3390/a13050128
      Issue No: Vol. 13, No. 5 (2020)
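      Code sketch: A generic cross-entropy (CE) method for a combinatorial change-point problem; candidate change-point indicator vectors are sampled from Bernoulli probabilities that are re-fitted to the elite samples. The cost used here is a crude residual-variance score plus a penalty per change-point, standing in for the paper's MDL criterion.

          import numpy as np

          rng = np.random.default_rng(0)

          # Toy series with a mean shift at t = 120:
          y = np.concatenate([rng.normal(0, 1, 120), rng.normal(3, 1, 80)])

          def cost(indicators):
              """Sum of within-segment squared deviations + penalty per break."""
              cuts = np.flatnonzero(indicators)
              bounds = np.concatenate(([0], cuts, [len(y)]))
              sse = sum(np.sum((y[a:b] - y[a:b].mean()) ** 2)
                        for a, b in zip(bounds[:-1], bounds[1:]) if b > a)
              return sse + 10.0 * len(cuts)

          p = np.full(len(y), 0.02)           # Bernoulli sampling probabilities
          for _ in range(50):
              samples = rng.random((100, len(y))) < p
              scores = np.array([cost(s) for s in samples])
              elite = samples[np.argsort(scores)[:10]]     # best 10%
              p = 0.7 * p + 0.3 * elite.mean(axis=0)       # smoothed CE update

          print(np.flatnonzero(p > 0.5))   # concentrates near the true break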
       
  • Algorithms, Vol. 13, Pages 129: Image Resolution Enhancement of Highly
           Compressively Sensed CT/PET Signals

    • Authors: Krzysztof Malczewski
      First page: 129
      Abstract: One of the most challenging aspects of medical modalities such as Computed Tomography (CT), as well as hybrid techniques such as CT/PET (Computed Tomography/Positron Emission Tomography) and PET/MRI, is finding a balance between examination time, radiation dose, and image quality. The need for a dense sampling grid is associated with two major factors: image resolution enhancement, which strengthens human perception, and image feature interpretation. All these aspects make unsupervised image processing much easier. The presented algorithm employs super-resolution reconstruction with high-accuracy motion field estimation at its core for CT/PET image enhancement. The suggested method starts by processing compressively sensed input signals. This paper shows that it is possible to achieve higher image resolution while keeping the same radiation dose. The purpose of this paper is to propose a highly effective CT/PET image reconstruction strategy, allowing for simultaneous resolution enhancement and scanning time minimization. The algorithm aims to overcome two major obstacles, image resolution limitation and reconstruction time efficiency, by combining a highly sparse Ridgelet-analysis-based sampling pattern and PET signal sensing with super-resolution (SR) image enhancement. Due to the diverse nature of Computed Tomography, the applied Ridgelet analysis turned out to be efficient in reducing acquisition times while maintaining satisfying scan quality. This paper presents a super-resolution image enhancement algorithm designed for handling highly compressively sensed raw data from hybrid CT/PET scanners. The presented technique improves image resolution while reducing motion artefacts and keeping scanning times low.
      Citation: Algorithms
      PubDate: 2020-05-21
      DOI: 10.3390/a13050129
      Issue No: Vol. 13, No. 5 (2020)
       
  • Algorithms, Vol. 13, Pages 130: Uncertainty Quantification Approach on
           Numerical Simulation for Supersonic Jets Performance

    • Authors: Carlo Cravero, Davide De Domenico, Andrea Ottonello
      First page: 130
      Abstract: One of the main issues addressed in any engineering design problem is to predict the performance of the component or system as accurately and realistically as possible, taking into account the variability of operating conditions or the uncertainty on input data (boundary conditions or geometry tolerance). In this paper, the propagation of uncertainty on boundary conditions through a numerical model of a supersonic nozzle is investigated. The evaluation of the statistics of the problem response functions is performed following a 'Surrogate-Based Uncertainty Quantification' approach. The approach involves: (a) the generation of a response surface starting from a design of experiments (DoE) in order to approximate the convergent–divergent 'physical' model (expensive to simulate); (b) the application of the UQ technique based on Latin hypercube sampling (LHS) to the meta-model. Probability density functions are introduced for the inlet boundary conditions in order to quantify their effects on the output nozzle performance. The physical problem considered is very relevant for experimental tests of the UQ approach because of its high non-linearity: a small perturbation of the input data can drive the solution to a completely different output condition. The CFD simulations and the uncertainty quantification were performed by coupling the open-source Dakota platform with the ANSYS Fluent® CFD commercial software; the process is automated through scripting. The procedure adopted in this work demonstrates the applicability of advanced simulation techniques (such as UQ analysis) to industrial technical problems. Moreover, the analysis highlights the practical use of uncertainty quantification techniques in predicting the performance of a nozzle design affected by off-design conditions with fluid-dynamic complexity due to strong nonlinearity. A minimal LHS propagation sketch follows this entry.
      Citation: Algorithms
      PubDate: 2020-05-22
      DOI: 10.3390/a13050130
      Issue No: Vol. 13, No. 5 (2020)
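      Code sketch: Propagating input uncertainty through a cheap surrogate with Latin hypercube sampling via scipy; the quadratic 'surrogate' function is a stand-in for the response surface fitted to CFD runs, and the input bounds are invented.

          import numpy as np
          from scipy.stats import qmc

          def surrogate(p_inlet, t_inlet):
              """Toy response surface standing in for the fitted meta-model."""
              return 2.0 * p_inlet + 0.01 * t_inlet ** 2 - 0.5 * p_inlet * t_inlet

          sampler = qmc.LatinHypercube(d=2, seed=0)
          unit = sampler.random(n=10_000)
          # Scale the unit hypercube to hypothetical inlet-condition ranges:
          samples = qmc.scale(unit, l_bounds=[1.0, 280.0], u_bounds=[5.0, 320.0])

          out = surrogate(samples[:, 0], samples[:, 1])
          print(f"mean = {out.mean():.2f}, std = {out.std():.2f}")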
       
  • Algorithms, Vol. 13, Pages 92: Deterministic Coresets for k-Means of
           Big Sparse Data

    • Authors: Artem Barger, Dan Feldman
      First page: 92
      Abstract: Let P be a set of n points in R^d, let k ≥ 1 be an integer, and let ε ∈ (0, 1) be a constant. An ε-coreset is a subset C ⊆ P with appropriate non-negative weights (scalars) that approximates any given set Q ⊆ R^d of k centers. That is, the sum of squared distances from every point in P to its closest point in Q is the same, up to a factor of 1 ± ε, as the weighted sum from C to the same k centers. If the coreset is small, we can solve problems such as k-means clustering or its variants (e.g., discrete k-means, where the centers are restricted to be in P, or other restricted zones) on the small coreset to obtain faster provable approximations. Moreover, it is known that such coresets support streaming, dynamic, and distributed data using the classic merge-and-reduce trees. The fact that the coreset is a subset implies that it preserves the sparsity of the data. However, existing coresets of this kind are randomized, and their size has at least a linear dependency on the dimension d. We suggest the first such coreset of size independent of d. This is also the first deterministic coreset construction whose resulting size is not exponential in d. Extensive experimental results and benchmarks are provided on public datasets, including the first coreset of the English Wikipedia using Amazon's cloud. A minimal sketch of the coreset cost guarantee follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-14
      DOI: 10.3390/a13040092
      Issue No: Vol. 13, No. 4 (2020)
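      Code sketch: What the ε-coreset guarantee means operationally: for a weighted subset C and any candidate set Q of k centers, the weighted cost on C should match the full cost on P within 1 ± ε. The uniform-sampling 'coreset' below is only a placeholder for the paper's deterministic construction.

          import numpy as np

          rng = np.random.default_rng(0)
          P = rng.random((5000, 3))                  # n points in R^d

          def kmeans_cost(points, centers, weights=None):
              """(Weighted) sum of squared distances to the closest center."""
              d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              closest = d2.min(axis=1)
              return closest.sum() if weights is None else weights @ closest

          # Placeholder coreset: uniform sample with weights n/m (not the
          # paper's deterministic construction, just the interface).
          m = 500
          idx = rng.choice(len(P), size=m, replace=False)
          C, w = P[idx], np.full(m, len(P) / m)

          Q = rng.random((10, 3))                    # arbitrary set of k centers
          full, approx = kmeans_cost(P, Q), kmeans_cost(C, Q, w)
          print(full, approx, abs(approx - full) / full)   # relative error ~ ε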
       
  • Algorithms, Vol. 13, Pages 93: Path Planning for Laser Cladding Robot on
           Artificial Joint Surface Based on Topology Reconstruction

    • Authors: Yuanjin Li, Tao Chen, Defu Liu
      First page: 93
      Abstract: Artificial joint surface coating is a hot issue in the interdisciplinary fields of manufacturing, materials and biomedicine. Due to the complex surface characteristics of artificial joints, there are problems with efficiency and precision in automatic cladding path planning for coating fabrication. In this study, a path planning method for a laser cladding robot for artificial joint surfaces was proposed. The key to this method is the topological reconstruction of the artificial joint surface. On the basis of the topological relation, a set of parallel planes was used to intersect the CAD model to generate a set of continuous, directed and equidistant surface transversals on the artificial joint surface. The arch height error method was used to extract robot interpolation points from the surface transversal lines according to machining accuracy requirements (a minimal sketch of this chord-deviation test follows this entry). The coordinates and normal vectors of the interpolation points were used to calculate the position and pose of the robot tool center point (TCP). To ensure that the laser beam was always perpendicular to the artificial joint surface, a novel laser cladding set-up with a robot was designed, in which the joint part, clamped by a six-axis robot, moved while the laser head was fixed on the workbench. The proposed methodology was validated with the planned path on the surface of an artificial acetabular cup using simulation and experimentation via an industrial NACHI robot. The results indicated that the path planning method based on topological reconstruction is feasible and more efficient than the traditional robot teaching method.
      Citation: Algorithms
      PubDate: 2020-04-15
      DOI: 10.3390/a13040093
      Issue No: Vol. 13, No. 4 (2020)
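      Code sketch: A greedy arch-height (chord deviation) point extraction over a sampled curve: keep extending the chord while every intermediate point stays within the allowed arch height h_max. The geometry helper is standard point-to-segment distance; the tolerance and the curve are invented.

          import numpy as np

          def point_segment_dist(p, a, b):
              """Perpendicular distance from point p to segment ab."""
              ab, ap = b - a, p - a
              t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
              return np.linalg.norm(p - (a + t * ab))

          def extract_by_arch_height(pts, h_max):
              """Greedy selection: a point becomes an interpolation point when
              the chord from the last kept point would violate the arch height."""
              kept = [0]
              i = 0
              while i < len(pts) - 1:
                  j = i + 1
                  while j + 1 < len(pts) and all(
                          point_segment_dist(pts[k], pts[i], pts[j + 1]) <= h_max
                          for k in range(i + 1, j + 1)):
                      j += 1
                  kept.append(j)
                  i = j
              return kept

          s = np.linspace(0, np.pi, 400)
          curve = np.stack([np.cos(s), np.sin(s)], axis=1)   # toy arc
          print(len(extract_by_arch_height(curve, h_max=1e-3)))  # few dozen points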
       
  • Algorithms, Vol. 13, Pages 94: Stochastic Models to Qualify Stem Tapers

    • Authors: Petrauskas, Rupšys, Narmontas, Aleinikovas, Beniušienė, Šilinskas
      First page: 94
      Abstract: This study examines the performance of 11 tree taper models in predicting the diameter outside bark at any given height and the total stem volume of eight dominant tree species in the boreal forests of Lithuania. Here, we develop eight new models using stochastic differential equations (SDEs). The symmetric Vasicek model and the asymmetric Gompertz model are used to describe tree taper evolution, as well as geometric-type diffusion processes. These models are compared with four traditionally used tree taper models by means of performance statistics and residual analysis. The observed dataset consists of longitudinal measurements of 3703 trees, representing the eight dominant tree species in Lithuania (pine, spruce, oak, ash, birch, black alder, white alder, and aspen). Overall, the SDE taper models produced the best goodness-of-fit statistics for diameter predictions. All results have been implemented in the Maple computer algebra system using the “Statistics” and “VectorCalculus” packages. A minimal simulation sketch of the Vasicek-type dynamics follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-15
      DOI: 10.3390/a13040094
      Issue No: Vol. 13, No. 4 (2020)
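      Code sketch: Euler–Maruyama simulation of a Vasicek-type SDE, dX = kappa*(theta − X)dt + sigma dW, of the kind used as a building block for the taper models; all parameter values are arbitrary illustrations.

          import numpy as np

          rng = np.random.default_rng(0)

          def vasicek_paths(x0, kappa, theta, sigma, T=1.0, n_steps=200, n_paths=5):
              """Euler-Maruyama discretization of dX = kappa*(theta-X)dt + sigma dW."""
              dt = T / n_steps
              x = np.full((n_paths, n_steps + 1), float(x0))
              for i in range(n_steps):
                  dw = rng.normal(0.0, np.sqrt(dt), n_paths)
                  x[:, i + 1] = x[:, i] + kappa * (theta - x[:, i]) * dt + sigma * dw
              return x

          paths = vasicek_paths(x0=30.0, kappa=2.0, theta=5.0, sigma=1.0)
          print(paths[:, -1])   # paths revert from 30 toward the mean level 5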
       
  • Algorithms, Vol. 13, Pages 95: How to Identify Varying Lead–Lag Effects
           in Time Series Data: Implementation, Validation, and Application of the
           Generalized Causality Algorithm

    • Authors: Johannes Stübinger, Katharina Adler
      First page: 95
      Abstract: This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs gross domestic product and consumer price index (macroeconomics), S&P 500 index and Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature. A minimal lead–lag detection sketch follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-16
      DOI: 10.3390/a13040095
      Issue No: Vol. 13, No. 4 (2020)
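      Code sketch: Lead–lag detection restricted to a constant, linear shift via cross-correlation; the paper's algorithm finds a varying, non-linear time-axis mapping, so this is only the simplest special case.

          import numpy as np

          rng = np.random.default_rng(0)

          def best_lag(x, y, max_lag=30):
              """Shift k maximizing corr(y[t], x[t-k]); k > 0 means y lags x."""
              n = len(x)
              def corr(k):
                  a = x[:n - k] if k >= 0 else x[-k:]
                  b = y[k:] if k >= 0 else y[:n + k]
                  return np.corrcoef(a, b)[0, 1]
              return max(range(-max_lag, max_lag + 1), key=corr)

          x = rng.standard_normal(500).cumsum()        # random-walk "leader"
          y = np.roll(x, 7) + 0.1 * rng.standard_normal(500)  # 7 steps behind
          print(best_lag(np.diff(x), np.diff(y)))      # recovers a lag of ~7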
       
  • Algorithms, Vol. 13, Pages 96: A Hybrid Grasshopper Optimization Algorithm
           Applied to the Open Vehicle Routing Problem

    • Authors: Valeria Soto-Mendoza, Irma García-Calvillo, Efraín Ruiz-y-Ruiz, Jaime Pérez-Terrazas
      First page: 96
      Abstract: This paper presents a hybrid grasshopper optimization algorithm using a novel decoder and local search to solve instances of the open vehicle routing problem with capacity and distance constraints. The algorithm's decoder first defines the number of vehicles to be used and then partitions the clients, assigning them to the available routes. The algorithm performs a local search in three neighborhoods after decoding. When a new best solution is found, every route is locally optimized by solving a traveling salesman problem, considering the depot and the clients in the route. Three sets containing a total of 30 benchmark problems from the literature were used to test the algorithm. The experiments considered two cases of the problem. In the first, the primary objective is to minimize the total number of vehicles and then the total distance to be traveled. In the second case, the total distance traveled by the vehicles is minimized. The obtained results showed the algorithm's strong performance. For the first case, the algorithm was able to improve or match the best-known solutions for 21 of the 30 benchmark problems. For the second case, the best-known solutions for 18 of the 30 benchmark problems were found or improved by the algorithm. Finally, a case study from a real-life problem is included.
      Citation: Algorithms
      PubDate: 2020-04-16
      DOI: 10.3390/a13040096
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 97: Performance Assessment of Predictive
            Control—A Survey

    • Authors: Paweł D. Domański
      First page: 97
      Abstract: Model Predictive Control constitutes an important element of any modern control system. There is growing interest in this technology. More and more advanced predictive structures have been implemented. The first applications were in chemical engineering, and now Model Predictive Control can be found in almost all kinds of applications, from the process industry to embedded control systems or for autonomous objects. Currently, each implementation of a control system requires strict financial justification. Application engineers need tools to measure and quantify the quality of the control and the potential for improvement that may be achieved by retrofitting control systems. Furthermore, a successful implementation of predictive control must conform to prior estimations not only during commissioning, but also during regular daily operations. The system must sustain the quality of control performance. The assessment of Model Predictive Control requires a suitable, often specific, methodology and comparative indicators. These demands establish the rationale of this survey. Therefore, the paper collects and summarizes control performance assessment methods specifically designed for and utilized in predictive control. These observations present the picture of the assessment technology. Further generalization leads to the formulation of a control assessment procedure to support control application engineers.
      Citation: Algorithms
      PubDate: 2020-04-17
      DOI: 10.3390/a13040097
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 98: Numerical Simulation of Non-Linear Models
            of Reaction–Diffusion for a DGT Sensor

    • Authors: Joan Cecilia Averós, Jaume Puy Llorens, Ramiro Uribe-Kaffure
      First page: 98
      Abstract: In this work, we present a novel strategy for the numerical solution of a coupled system of partial differential equations that describe reaction–diffusion processes of a mixture of metals and ligands that can be absorbed by a sensor or a microorganism in an aqueous medium. The novelty introduced in this work consists of adequate database management in conjunction with a direct iterative scheme, which allows the construction of simple, fast and efficient algorithms. Except in really adverse conditions, the calculation converges and satisfactory solutions are reached. Computing times proved better than those obtained with some commercial programs. Although we concentrate on the solution for a particular system (Diffusive Gradients in Thin Films [DGT] sensors), the proposed algorithm does not require major modifications to accommodate new theoretical or experimental configurations. Since the quality of numerical simulations of reaction–diffusion problems often degrades as the values of the reaction rate constants increase, some additional effort has been invested in obtaining proper solutions in those cases. A minimal explicit finite-difference sketch follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-20
      DOI: 10.3390/a13040098
      Issue No: Vol. 13, No. 4 (2020)
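      Code sketch: The simplest explicit (FTCS) time step for a 1-D reaction–diffusion equation du/dt = D u_xx − k u, of the family the paper solves with a more robust iterative scheme; the grid, constants and boundary values are illustrative, and the stability limit dt ≤ dx²/(2D) applies.

          import numpy as np

          D, k = 1e-9, 0.5                 # diffusivity (m^2/s), reaction rate (1/s)
          nx, dx = 101, 1e-5               # grid resolution (m)
          dt = 0.4 * dx * dx / D           # below the FTCS stability limit
          u = np.ones(nx)                  # initial concentration (normalized)

          for _ in range(2000):
              lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / (dx * dx)
              u[1:-1] += dt * (D * lap - k * u[1:-1])
              u[0], u[-1] = 0.0, 1.0       # sink at the sensor, fixed bulk value

          print(u[:5])                     # steady gradient develops at the sink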
       
  • Algorithms, Vol. 13, Pages 99: A New Lossless DNA Compression Algorithm
            Based on a Single-Block Encoding Scheme

    • Authors: Deloula Mansouri, Xiaohui Yuan, Abdeldjalil Saidani
      First page: 99
      Abstract: With the emergent evolution of DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding ever more storage and bandwidth. Unfortunately, managing, analyzing and, specifically, storing these large amounts of data has become a major scientific challenge for bioinformatics. Therefore, to overcome these challenges, compression has become necessary. In this paper, we describe a new reference-free DNA compressor, abbreviated as DNAC-SBE. DNAC-SBE is a lossless hybrid compressor that consists of three phases. First, starting from the most frequent base (Bi), the positions of each Bi are replaced with ones, and the positions of the other bases, which have smaller frequencies than Bi, are replaced with zeros. Second, to encode the generated streams, we propose a new single-block encoding scheme (SBE) based on exploiting the positions of neighboring bits within a block, using two different techniques. Finally, the proposed algorithm dynamically assigns the shorter-length code to each block. Results show that DNAC-SBE outperforms state-of-the-art compressors and proves its efficiency in terms of the special conditions imposed on compressed data, storage space and data transfer rate, regardless of the file format or the size of the data.
      Citation: Algorithms
      PubDate: 2020-04-20
      DOI: 10.3390/a13040099
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 100: A Survey of Low-Rank Updates of
           Preconditioners for Sequences of Symmetric Linear Systems

    • Authors: Luca Bergamaschi
      First page: 100
      Abstract: The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, ..., arising in many scientific applications, such as discretization of transient Partial Differential Equations (PDEs), solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we will analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We will also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be much enhanced by the use of low-rank updates.
      Citation: Algorithms
      PubDate: 2020-04-21
      DOI: 10.3390/a13040100
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 101: A New Way to Store Simple Text Files

    • Authors: Marcin Lawnik, Artur Pełka, Adrian Kapczyński
      First page: 101
      Abstract: In the era of ubiquitous digitization and the Internet of Things (IoT), information plays a vital role. All types of data are collected, and some of these data are stored as text files. An important aspect, regardless of the type of data, is file storage, especially the amount of disk space that is required. The less space is used to store data sets, the lower the cost of this service. Another important aspect of storing data warehouses in the form of files is the cost of the data transmission needed for file transfer and processing. Moreover, the stored data should be minimally protected against access and reading by other entities. The aspects mentioned above are particularly important for large data sets like Big Data. Considering the above criteria, i.e., minimizing storage space and data transfer while ensuring minimum security, the main goal of this article is to show a new way of storing text files. The article presents a method that converts data from text files such as txt, json, html and py to images (image files) in png format. Taking into account criteria such as the output size of the file, the results obtained for the test files confirm that the presented method makes it possible to reduce the need for disk space, as well as to hide data in an image file. The described method can be used for texts saved in extended ASCII and UTF-8 coding. A minimal text-to-PNG sketch follows this entry.
      Citation: Algorithms
      PubDate: 2020-04-22
      DOI: 10.3390/a13040101
      Issue No: Vol. 13, No. 4 (2020)
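      Code sketch: Packing the bytes of a text file into the pixels of an RGB PNG with Pillow, in the spirit of the described method; the exact mapping, padding and header handling in the paper may differ, and this version stores a length prefix so the file can be recovered.

          import math
          from PIL import Image

          def text_to_png(src_path, dst_path):
              """Store a file's bytes in an RGB image: 4-byte length prefix,
              zero padding to fill the pixel grid, lossless PNG output."""
              data = open(src_path, "rb").read()
              payload = len(data).to_bytes(4, "big") + data
              side = math.ceil(math.sqrt(len(payload) / 3))   # square-ish image
              payload += bytes(side * side * 3 - len(payload))
              img = Image.frombytes("RGB", (side, side), payload)
              img.save(dst_path, "PNG")

          def png_to_text(src_path):
              raw = Image.open(src_path).convert("RGB").tobytes()
              n = int.from_bytes(raw[:4], "big")
              return raw[4:4 + n]

          text_to_png("notes.txt", "notes.png")
          print(png_to_text("notes.png")[:40])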
       
  • Algorithms, Vol. 13, Pages 102: A Case Study for a Big Data and Machine
           Learning Platform to Improve Medical Decision Support in Population Health
           Management

    • Authors: Fernando López-Martínez, Edward Rolando Núñez-Valdez, Vicente García-Díaz, Zoran Bursac
      First page: 102
      Abstract: Big data and artificial intelligence are currently two of the most important and trending drivers of innovation and predictive analytics in healthcare, leading the digital healthcare transformation. The Keralty organization is already working on developing an intelligent big data analytic platform based on machine learning and data integration principles. We discuss how this platform is the new pillar for the organization to improve population health management, value-based care, and new upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improvement of clinical operations, reduced costs of care, and generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, improving diagnosis and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as data generated by public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia to integrate operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management.
      Citation: Algorithms
      PubDate: 2020-04-23
      DOI: 10.3390/a13040102
      Issue No: Vol. 13, No. 4 (2020)
       
  • Algorithms, Vol. 13, Pages 103: Practical Grammar Compression Based on
           Maximal Repeats †

    • Authors: Isamu Furuya, Takuya Takagi, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai, Takuya Kida
      First page: 103
      Abstract: This study presents an analysis of RePair, a grammar compression algorithm known for its simple scheme while also being practically effective. First, we show that the main process of RePair, the step-by-step substitution of the most frequent symbol pairs, works within the corresponding most frequent maximal repeats. Then, we reveal the relationship between maximal repeats and the grammars constructed by RePair. On the basis of this analysis, we propose a novel variant of RePair, called MR-RePair, which performs a one-time substitution of the most frequent maximal repeats instead of the consecutive substitution of the most frequent pairs. Experiments comparing the sizes of the constructed grammars and the execution times of RePair and MR-RePair on several text corpora demonstrate that MR-RePair constructs more compact grammars than RePair does, especially for highly repetitive texts. (A toy RePair sketch follows this entry.)
      Citation: Algorithms
      PubDate: 2020-04-23
      DOI: 10.3390/a13040103
      Issue No: Vol. 13, No. 4 (2020)
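      A toy sketch of the classic RePair scheme the paper analyzes: repeatedly replace the most frequent adjacent symbol pair with a fresh nonterminal. Real RePair uses priority queues to run in linear time, and MR-RePair substitutes maximal repeats rather than single pairs; the quadratic version below only illustrates the substitution idea.

        from collections import Counter

        def repair(text: str):
            seq = list(text)
            rules = {}                          # nonterminal -> replaced pair
            next_id = 0
            while len(seq) > 1:
                pair, freq = Counter(zip(seq, seq[1:])).most_common(1)[0]
                if freq < 2:                    # no pair worth replacing
                    break
                nt = f"R{next_id}"
                next_id += 1
                rules[nt] = pair
                out, i = [], 0
                while i < len(seq):             # greedy left-to-right substitution
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                        out.append(nt)
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                seq = out
            return seq, rules

        seq, rules = repair("abracadabra abracadabra")
        print(seq, rules)   # grammar size ~ len(seq) + 2 * len(rules)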
       
  • Algorithms, Vol. 13, Pages 104: Decision Support System for Fitting and
           Mapping Nonlinear Functions with Application to Insect Pest Management in
           the Biological Control Context

    • Authors: Guimapi, Mohamed, Biber-Freudenberger, Mwangi, Ekesi, Borgemeister, Tonnang
      First page: 104
      Abstract: The process of moving from experimental data to modeling and characterizing the dynamics and interactions of natural processes is a challenging task. This paper proposes an interactive platform for fitting experimentally derived data to mathematical expressions and for carrying out spatial visualization. The platform is designed using a component-based software architecture and implemented in the R and Java programming languages. It uses experimental data as input for model fitting and then applies the obtained model at the landscape level via spatial temperature grid data to yield regional and continental maps. The modules and functionalities of the tool are presented through a case study in which the tool is used to establish a temperature-dependent virulence model and map the potential zone of efficacy of a fungal-based biopesticide. The decision support system (DSS) was developed in generic form; it can be used by anyone interested in fitting mathematical equations to experimental data collected following the described protocol and, depending on the type of investigation, it offers the possibility of projecting the model at the landscape level. (A minimal curve-fitting sketch follows this entry.)
      Citation: Algorithms
      PubDate: 2020-04-24
      DOI: 10.3390/a13040104
      Issue No: Vol. 13, No. 4 (2020)
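      A minimal sketch of the core fitting step such a DSS performs, using SciPy's curve_fit to fit a nonlinear temperature-response curve. The bell-shaped model and the data points below are hypothetical illustrations, not the authors' published virulence model or measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def virulence(T, vmax, Topt, width):
            """Hypothetical bell-shaped temperature-virulence response."""
            return vmax * np.exp(-((T - Topt) / width) ** 2)

        # Made-up example observations (temperature in deg C, mortality fraction).
        temps = np.array([15, 20, 25, 28, 30, 33, 36], dtype=float)
        mortality = np.array([0.12, 0.35, 0.71, 0.88, 0.84, 0.52, 0.20])

        params, cov = curve_fit(virulence, temps, mortality, p0=[0.9, 28.0, 5.0])
        vmax, Topt, width = params
        print(f"optimum near {Topt:.1f} C, peak virulence {vmax:.2f}")
        # Evaluating `virulence` over a gridded temperature raster would yield
        # the landscape-level efficacy maps described in the abstract.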
       
  • Algorithms, Vol. 13, Pages 105: Variational Specific Mode Extraction: A
           Novel Method for Defect Signal Detection of Ferromagnetic Pipeline

    • Authors: Ju, Wang, Zhao
      First page: 105
      Abstract: The non-contact detection of buried ferromagnetic pipelines is a long-standing problem in the inspection of external pipelines, and the extraction of the magnetic anomaly signal is a prerequisite for accurate detection. Pipeline defects cause fluctuations in magnetic signals, which are easily submerged in wide-band background noise in the absence of external excitation sources. Previously, Variational Mode Decomposition (VMD) was used to separate modal components; however, VMD is based on a narrow-band signal-processing algorithm and is computationally complex. In this article, a method based on Variational Specific Mode Extraction (VSME) is employed to extract the signal at a specific central frequency by signal modal decomposition, where the specific mode is the weak magnetic anomaly signal of pipeline defects. VSME is based on the fact that a wide-band signal can be converted into a narrow-band signal by demodulation. The wide-band signal decomposition problem is thus expressed as an optimal demodulation problem, which can be solved by the alternating direction method of multipliers (ADMM). The proposed algorithm is verified on artificially synthesized signals, and its performance is better than that of VMD. The results show that, on experimental data, the VSME method can extract the magnetic anomaly signal of pipeline damage with better accuracy. (A minimal demodulation sketch follows this entry.)
      Citation: Algorithms
      PubDate: 2020-04-24
      DOI: 10.3390/a13040105
      Issue No: Vol. 13, No. 4 (2020)
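      A minimal sketch of the demodulation fact VSME rests on: shifting the band around a chosen center frequency to baseband turns a wide-band extraction problem into narrow-band low-pass filtering. This generic analytic-signal version is an illustration, not the paper's ADMM-based optimization.

        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        def extract_mode(x, fs, f_c, bandwidth):
            """Extract the narrow-band component of x centered at f_c (Hz)."""
            t = np.arange(len(x)) / fs
            analytic = hilbert(x)                                # drop negative freqs
            baseband = analytic * np.exp(-2j * np.pi * f_c * t)  # shift f_c to 0 Hz
            b, a = butter(4, bandwidth / (fs / 2))               # low-pass filter
            narrow = filtfilt(b, a, baseband.real) + 1j * filtfilt(b, a, baseband.imag)
            return (narrow * np.exp(2j * np.pi * f_c * t)).real  # shift back

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        mode_50 = extract_mode(x, fs, f_c=50.0, bandwidth=10.0)  # isolates 50 Hz tone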
       
 