  Subjects -> COMPUTER SCIENCE (Total: 2122 journals)
    - ANIMATION AND SIMULATION (31 journals)
    - ARTIFICIAL INTELLIGENCE (105 journals)
    - AUTOMATION AND ROBOTICS (105 journals)
    - COMPUTER ARCHITECTURE (10 journals)
    - COMPUTER ENGINEERING (11 journals)
    - COMPUTER GAMES (21 journals)
    - COMPUTER PROGRAMMING (26 journals)
    - COMPUTER SCIENCE (1231 journals)
    - COMPUTER SECURITY (50 journals)
    - DATA BASE MANAGEMENT (14 journals)
    - DATA MINING (38 journals)
    - E-BUSINESS (22 journals)
    - E-LEARNING (30 journals)
    - IMAGE AND VIDEO PROCESSING (40 journals)
    - INFORMATION SYSTEMS (107 journals)
    - INTERNET (96 journals)
    - SOCIAL WEB (53 journals)
    - SOFTWARE (34 journals)
    - THEORY OF COMPUTING (9 journals)

COMPUTER SCIENCE (1231 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 24)
Abakós     Open Access   (Followers: 4)
ACM Computing Surveys     Hybrid Journal   (Followers: 31)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 8)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 17)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 15)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 5)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 9)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 12)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 16)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 7)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 6)
ACM Transactions on Economics and Computation     Hybrid Journal   (Followers: 2)
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 20)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 8)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 5)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 8)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 35)
Acta Automatica Sinica     Full-text available via subscription   (Followers: 2)
Acta Informatica Malaysia     Open Access  
Acta Universitatis Cibiniensis. Technical Series     Open Access  
Ad Hoc Networks     Hybrid Journal   (Followers: 11)
Adaptive Behavior     Hybrid Journal   (Followers: 10)
Advanced Engineering Materials     Hybrid Journal   (Followers: 29)
Advanced Science Letters     Full-text available via subscription   (Followers: 11)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 7)
Advances in Artificial Intelligence     Open Access   (Followers: 15)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 6)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 19)
Advances in Computer Engineering     Open Access   (Followers: 4)
Advances in Computer Science : an International Journal     Open Access   (Followers: 14)
Advances in Computing     Open Access   (Followers: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 59)
Advances in Engineering Software     Hybrid Journal   (Followers: 28)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 14)
Advances in Human Factors/Ergonomics     Full-text available via subscription   (Followers: 23)
Advances in Human-Computer Interaction     Open Access   (Followers: 21)
Advances in Materials Science     Open Access   (Followers: 15)
Advances in Operations Research     Open Access   (Followers: 12)
Advances in Parallel Computing     Full-text available via subscription   (Followers: 7)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Remote Sensing     Open Access   (Followers: 51)
Advances in Science and Research (ASR)     Open Access   (Followers: 6)
Advances in Technology Innovation     Open Access   (Followers: 6)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 4)
AI EDAM     Hybrid Journal   (Followers: 1)
Air, Soil & Water Research     Open Access   (Followers: 14)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 7)
Algebras and Representation Theory     Hybrid Journal   (Followers: 1)
Algorithms     Open Access   (Followers: 11)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 5)
American Journal of Computational Mathematics     Open Access   (Followers: 4)
American Journal of Information Systems     Open Access   (Followers: 6)
American Journal of Sensor Technology     Open Access   (Followers: 4)
Anais da Academia Brasileira de Ciências     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 7)
Analysis in Theory and Applications     Hybrid Journal   (Followers: 1)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 5)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 12)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 12)
Annals of Pure and Applied Logic     Open Access   (Followers: 3)
Annals of Software Engineering     Hybrid Journal   (Followers: 13)
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access  
Annual Reviews in Control     Hybrid Journal   (Followers: 8)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 2)
Applied and Computational Harmonic Analysis     Full-text available via subscription   (Followers: 1)
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 12)
Applied Categorical Structures     Hybrid Journal   (Followers: 5)
Applied Clinical Informatics     Hybrid Journal   (Followers: 2)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 14)
Applied Computer Systems     Open Access   (Followers: 2)
Applied Informatics     Open Access  
Applied Mathematics and Computation     Hybrid Journal   (Followers: 33)
Applied Medical Informatics     Open Access   (Followers: 11)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Soft Computing     Hybrid Journal   (Followers: 17)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 7)
Applied System Innovation     Open Access  
Architectural Theory Review     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 6)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 152)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 6)
arq: Architectural Research Quarterly     Hybrid Journal   (Followers: 8)
Artifact     Open Access   (Followers: 2)
Artificial Life     Hybrid Journal   (Followers: 7)
Asia Pacific Journal on Computational Engineering     Open Access  
Asia-Pacific Journal of Information Technology and Multimedia     Open Access   (Followers: 1)
Asian Journal of Control     Hybrid Journal  
Assembly Automation     Hybrid Journal   (Followers: 2)
at - Automatisierungstechnik     Hybrid Journal   (Followers: 1)
Australian Educational Computing     Open Access   (Followers: 1)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 6)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 12)
Automation in Construction     Hybrid Journal   (Followers: 7)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 8)
Balkan Journal of Electrical and Computer Engineering     Open Access  
Basin Research     Hybrid Journal   (Followers: 5)
Behaviour & Information Technology     Hybrid Journal   (Followers: 51)
Big Data and Cognitive Computing     Open Access   (Followers: 3)
Biodiversity Information Science and Standards     Open Access  
Bioinformatics     Hybrid Journal   (Followers: 323)
Biomedical Engineering     Hybrid Journal   (Followers: 16)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 13)
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 19)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 35)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 51)
British Journal of Educational Technology     Hybrid Journal   (Followers: 157)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 12)
c't Magazin für Computertechnik     Full-text available via subscription   (Followers: 1)
CALCOLO     Hybrid Journal  
Calphad     Hybrid Journal   (Followers: 2)
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 15)
Capturing Intelligence     Full-text available via subscription  
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 2)
Cell Communication and Signaling     Open Access   (Followers: 2)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 5)
CERN IdeaSquare Journal of Experimental Innovation     Open Access   (Followers: 3)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 15)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 8)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
CIN Computers Informatics Nursing     Hybrid Journal   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 15)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Clinical eHealth     Open Access  
Cluster Computing     Hybrid Journal   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 14)
Communication Methods and Measures     Hybrid Journal   (Followers: 13)
Communication Theory     Hybrid Journal   (Followers: 24)
Communications Engineer     Hybrid Journal   (Followers: 1)
Communications in Algebra     Hybrid Journal   (Followers: 3)
Communications in Computational Physics     Full-text available via subscription   (Followers: 2)
Communications in Information Science and Management Engineering     Open Access   (Followers: 4)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 4)
Communications of the ACM     Full-text available via subscription   (Followers: 51)
Communications of the Association for Information Systems     Open Access   (Followers: 16)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 3)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 6)
Complexus     Full-text available via subscription  
Composite Materials Series     Full-text available via subscription   (Followers: 8)
Computación y Sistemas     Open Access  
Computation     Open Access   (Followers: 1)
Computational and Applied Mathematics     Hybrid Journal   (Followers: 3)
Computational and Mathematical Biophysics     Open Access   (Followers: 1)
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 1)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 9)
Computational Astrophysics and Cosmology     Open Access   (Followers: 1)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 12)
Computational Chemistry     Open Access   (Followers: 2)
Computational Cognitive Science     Open Access   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Condensed Matter     Open Access   (Followers: 1)
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 9)
Computational Geosciences     Hybrid Journal   (Followers: 17)
Computational Linguistics     Open Access   (Followers: 24)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 8)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 35)
Computer     Full-text available via subscription   (Followers: 105)
Computer Aided Surgery     Open Access   (Followers: 6)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 8)
Computer Communications     Hybrid Journal   (Followers: 16)


Algorithms
Journal Prestige (SJR): 0.217
Citation Impact (CiteScore): 1
Number of Followers: 11

Open Access journal
ISSN (Print): 1999-4893
Published by MDPI  [205 journals]
  • Algorithms, Vol. 12, Pages 28: FPGA Implementation of ECT Digital System
           for Imaging Conductive Materials

    • Authors: Wael Deabes
      First page: 28
      Abstract: This paper presents the hardware implementation of a stand-alone Electrical Capacitance Tomography (ECT) system on a Field Programmable Gate Array (FPGA). The image reconstruction algorithms of the ECT system demand intensive computation and fast processing of a large number of measurements. The inner product of large vectors is the core of the majority of these algorithms; therefore, a reconfigurable segmented parallel inner-product architecture for parallel matrix multiplication is proposed. In addition, hardware/software codesign targeting an FPGA System-on-Chip (SoC) is applied to achieve high performance. The codesign is developed with commercial tools to tune the software algorithms and parameters of the system. The ECT system is used in this work to monitor the characteristics of the molten metal in the Lost Foam Casting (LFC) process. The hardware consists of capacitive sensors, wireless nodes, and an FPGA module. The experimental results reveal high stability and accuracy for the FPGA-based ECT system, which achieves high processing speed with a small design footprint.
      Citation: Algorithms
      PubDate: 2019-01-22
      DOI: 10.3390/a12020028
      Issue No: Vol. 12, No. 2 (2019)
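The abstract above identifies the inner product of large vectors as the computational core of the reconstruction algorithms and proposes a segmented parallel architecture for it. A rough software model of the idea is sketched below; the segment count and the final combining stage are illustrative assumptions, not the paper's actual hardware design.

```python
def segmented_inner_product(x, y, segments=4):
    """Software model of a segmented inner-product unit: split the vectors
    into `segments` chunks, compute each chunk's partial inner product
    independently (in hardware these run in parallel), then combine the
    partial sums (in hardware, an adder tree)."""
    assert len(x) == len(y)
    n = len(x)
    # chunk size; the last chunk absorbs any remainder
    step = (n + segments - 1) // segments
    partials = []
    for s in range(0, n, step):
        partials.append(sum(a * b for a, b in zip(x[s:s + step], y[s:s + step])))
    return sum(partials)  # combining stage, modelled as a plain sum
```

Each row-times-vector step of a matrix multiplication then reduces to one call of this unit.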
  • Algorithms, Vol. 12, Pages 29: A Distributed Execution Pipeline for
           Clustering Trajectories Based on a Fuzzy Similarity Relation

    • Authors: Soufiane Maguerra, Azedine Boulmakoul, Lamia Karim, Hassan Badir
      First page: 29
      Abstract: The proliferation of indoor and outdoor tracking devices has led to a vast amount of spatial data. Each object can be described by several trajectories that, once analysed, can yield significant knowledge. In particular, pattern analysis by clustering generic trajectories can give insight into objects sharing the same patterns. Sequential clustering approaches, however, fail to handle large volumes of data, hence the need for distributed systems that can infer knowledge in reasonable time. In this paper, we detail an efficient, scalable, and distributed execution pipeline for clustering raw trajectories. The clustering is achieved via a fuzzy similarity relation obtained by the transitive closure of a proximity relation. The pipeline is integrated in Spark, implemented in Scala, and leverages the Core and GraphX libraries, making use of Resilient Distributed Datasets (RDDs) and graph processing. Furthermore, a new simple but very efficient partitioning logic has been deployed in Spark and integrated into the execution process. The objective of this logic is to distribute the load equally among all executors by considering the complexity of the data; resolving this load-balancing issue substantially reduces execution time compared with the conventional approach. The evaluation and performance of the whole distributed process are analysed on the Geolife project's GPS trajectory dataset.
      Citation: Algorithms
      PubDate: 2019-01-22
      DOI: 10.3390/a12020029
      Issue No: Vol. 12, No. 2 (2019)
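The fuzzy similarity relation mentioned in the entry above is obtained as the transitive closure of a proximity relation. Under the standard max-min definition, this closure can be computed by iterating the max-min composition to a fixpoint; a minimal sequential sketch follows (the paper's distributed Spark/GraphX implementation is of course far more involved).

```python
def maxmin_compose(a, b):
    """Max-min composition of two fuzzy relations given as square matrices."""
    n = len(a)
    return [[max(min(a[i][k], b[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(r):
    """Iterate R <- max(R, R o R) until a fixpoint. When R is reflexive and
    symmetric, the result is a fuzzy similarity (equivalence) relation."""
    n = len(r)
    while True:
        rr = maxmin_compose(r, r)
        merged = [[max(r[i][j], rr[i][j]) for j in range(n)] for i in range(n)]
        if merged == r:
            return r
        r = merged
```

Thresholding the closure at a cut level then partitions the objects into clusters: for the matrix in the test below, a cut at 0.5 groups objects 0 and 1 together and leaves object 2 alone.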
  • Algorithms, Vol. 12, Pages 30: An Exploration of a Balanced Up-Downwind
           Scheme for Solving Heston Volatility Model Equations on Variable Grids

    • Authors: Chong Sun, Qin Sheng
      First page: 30
      Abstract: This paper studies an effective finite difference scheme for solving two-dimensional Heston stochastic volatility option-pricing model problems. A dynamically balanced up-downwind strategy for approximating the cross-derivative is implemented and analyzed. Semi-discretized and spatially nonuniform platforms are utilized. The resulting numerical method is simple and straightforward, with reliable first-order overall accuracy. The spectral norm is used throughout the investigation, and numerical stability is proven. Simulation experiments illustrate our results.
      Citation: Algorithms
      PubDate: 2019-01-22
      DOI: 10.3390/a12020030
      Issue No: Vol. 12, No. 2 (2019)
  • Algorithms, Vol. 12, Pages 31: Particle Probability Hypothesis Density
           Filter Based on Pairwise Markov Chains

    • Authors: Jiangyi Liu, Chunping Wang, Wei Wang, Zheng Li
      First page: 31
      Abstract: Most multi-target tracking filters assume that a target and its observation follow a Hidden Markov Chain (HMC) model, but the implicit independence assumption of the HMC model is invalid in many practical applications; a Pairwise Markov Chain (PMC) model is more universally applicable than the traditional HMC model. A set of weighted particles is used to approximate the probability hypothesis density of multiple targets in the framework of the PMC model, and a particle probability hypothesis density filter based on the PMC model (PF-PMC-PHD) is proposed for nonlinear multi-target tracking systems. Simulation results show the effectiveness of the PF-PMC-PHD filter, whose tracking performance is superior to that of the particle PHD filter based on the HMC model in a scenario where the local physical properties of nonlinear and Gaussian HMC models are kept while their independence assumption is relaxed.
      Citation: Algorithms
      PubDate: 2019-01-31
      DOI: 10.3390/a12020031
      Issue No: Vol. 12, No. 2 (2019)
  • Algorithms, Vol. 12, Pages 32: Fog-Computing-Based Heartbeat Detection and
           Arrhythmia Classification Using Machine Learning

    • Authors: Alessandro Scirè, Fabrizio Tropeano, Aris Anagnostopoulos, Ioannis Chatzigiannakis
      First page: 32
      Abstract: Designing advanced health monitoring systems is still an active research topic. Wearable and remote monitoring devices enable monitoring of physiological and clinical parameters (heart rate, respiration rate, temperature, etc.) and analysis using cloud-centric machine-learning applications and decision-support systems to predict critical clinical states. This paper moves from a totally cloud-centric concept to a more distributed one by transferring sensor-data processing and analysis tasks to the edges of the network. The resulting solution enables the analysis and interpretation of sensor-data traces within the wearable device, providing actionable alerts without any dependence on cloud services. In this paper, we use a supervised-learning approach to detect heartbeats and classify arrhythmias. The system uses a window-based feature definition suitable for execution on an asymmetric multicore embedded processor that provides a dedicated core for hardware-assisted pattern matching. We evaluate the performance of the system against various existing approaches in terms of accuracy in detecting abnormal events. The results show that the proposed embedded system achieves a high detection rate that in some cases matches the accuracy of state-of-the-art algorithms executed on standard processors.
      Citation: Algorithms
      PubDate: 2019-02-02
      DOI: 10.3390/a12020032
      Issue No: Vol. 12, No. 2 (2019)
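The "window-based feature definition" in the entry above is not spelled out in the abstract. As a purely hypothetical illustration, a sliding-window extractor emitting simple per-window statistics might look like the following; the specific features (mean, peak-to-peak amplitude, energy) are common choices for beat detection but are assumptions here, not the paper's feature set.

```python
def window_features(signal, width, stride):
    """Hypothetical window-based feature extractor: slide a fixed-width
    window over the signal with the given stride and emit simple
    statistics per window."""
    feats = []
    for start in range(0, len(signal) - width + 1, stride):
        w = signal[start:start + width]
        mean = sum(w) / width
        feats.append({
            "mean": mean,
            "ptp": max(w) - min(w),                     # peak-to-peak amplitude
            "energy": sum((v - mean) ** 2 for v in w),  # variance-like energy
        })
    return feats
```

The resulting per-window feature vectors would then feed a supervised classifier for heartbeat detection and arrhythmia labelling.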
  • Algorithms, Vol. 12, Pages 33: Optimized Sonar Broadband Focused
           Beamforming Algorithm

    • Authors: Bi, Feng, Zhang
      First page: 33
      Abstract: Biases in initial direction estimation and focusing frequency selection affect the final focusing effect and may even cause failure in determining the focusing matrix in the coherent signal-subspace method. An optimized sonar broadband focused beamforming algorithm is proposed to address these defects. First, the robust Capon beamforming algorithm is used to correct the focusing matrix; the broadband signals are then focused at the optimal focusing frequency by the corrected focusing matrix, transforming the wideband beamforming task into a narrowband problem. Finally, the focused narrowband signals are beamformed by a second-order cone programming algorithm. Computer simulations and water pool experiments verify that the proposed algorithm provides good performance.
      Citation: Algorithms
      PubDate: 2019-02-05
      DOI: 10.3390/a12020033
      Issue No: Vol. 12, No. 2 (2019)
  • Algorithms, Vol. 12, Pages 34: From the Quantum Approximate Optimization
           Algorithm to a Quantum Alternating Operator Ansatz

    • Authors: Stuart Hadfield, Zhihui Wang, Bryan O'Gorman, Eleanor G. Rieffel, Davide Venturelli, Rupak Biswas
      First page: 34
      Abstract: The next few years will be exciting as prototype universal quantum processors emerge, enabling the implementation of a wider variety of algorithms. Of particular interest are quantum heuristics, which require experimentation on quantum hardware for their evaluation and which have the potential to significantly expand the breadth of applications for which quantum computers have an established advantage. A leading candidate is Farhi et al.’s quantum approximate optimization algorithm, which alternates between applying a cost function based Hamiltonian and a mixing Hamiltonian. Here, we extend this framework to allow alternation between more general families of operators. The essence of this extension, the quantum alternating operator ansatz, is the consideration of general parameterized families of unitaries rather than only those corresponding to the time evolution under a fixed local Hamiltonian for a time specified by the parameter. This ansatz supports the representation of a larger, and potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas. For cases that call for mixing only within a desired subspace, refocusing on unitaries rather than Hamiltonians enables more efficiently implementable mixers than was possible in the original framework. Such mixers are particularly useful for optimization problems with hard constraints that must always be satisfied, defining a feasible subspace, and soft constraints whose violation we wish to minimize. More efficient implementation enables earlier experimental exploration of an alternating operator approach, in the spirit of the quantum approximate optimization algorithm, to a wide variety of approximate optimization, exact optimization, and sampling problems. In addition to introducing the quantum alternating operator ansatz, we lay out design criteria for mixing operators, detail mappings for eight problems, and provide a compendium with brief descriptions of mappings for a diverse array of problems.
      Citation: Algorithms
      PubDate: 2019-02-12
      DOI: 10.3390/a12020034
      Issue No: Vol. 12, No. 2 (2019)
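The alternation described in the entry above can be written compactly. In the original QAOA, the state is built by alternating time evolution under a cost Hamiltonian and a mixing Hamiltonian; the alternating operator ansatz replaces these with general parameterized unitary families. The notation below follows the standard presentation and is an editorial paraphrase, not a quotation of the paper.

```latex
% Original QAOA: alternate time evolution under a cost Hamiltonian H_C
% and a mixing Hamiltonian H_M, starting from an easy-to-prepare state |s>:
|\psi(\boldsymbol{\gamma},\boldsymbol{\beta})\rangle
  = e^{-i\beta_p H_M}\, e^{-i\gamma_p H_C} \cdots
    e^{-i\beta_1 H_M}\, e^{-i\gamma_1 H_C}\, |s\rangle .

% Quantum alternating operator ansatz: replace the fixed-Hamiltonian
% evolutions with general parameterized unitary families U_M and U_P:
|\psi(\boldsymbol{\gamma},\boldsymbol{\beta})\rangle
  = U_M(\beta_p)\, U_P(\gamma_p) \cdots U_M(\beta_1)\, U_P(\gamma_1)\, |s\rangle .
```

The gain is that $U_M$ need not be generated by a fixed local Hamiltonian, which is what permits mixers confined to a feasible subspace defined by hard constraints.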
  • Algorithms, Vol. 12, Pages 35: Research on Quantitative Investment
           Strategies Based on Deep Learning

    • Authors: Fang, Chen, Xue
      First page: 35
      Abstract: This paper takes the 50 ETF options market, with its high transaction complexity, as the research object. The Random Forest (RF) model, the Long Short-Term Memory network (LSTM) model, and the Support Vector Regression (SVR) model are used to predict the 50 ETF price. First, starting from the original quantitative investment strategy, a 15-minute trading frequency, which is closer to actual trading conditions, is adopted, and the Delta hedging concept from options is introduced to control the risk of the quantitative investment strategy, yielding a 15-minute hedging strategy. Second, the final transaction price, buy price, highest price, lowest price, volume, historical volatility, and implied volatility of each 50 ETF time segment are taken as the seven key factors affecting the 50 ETF price. Then, two LSTM-SVR variants, LSTM-SVR I and LSTM-SVR II, are used to predict the final transaction price of the 50 ETF in the next time segment. In the LSTM-SVR I model, the output of the LSTM and the seven key factors are combined as the input of the SVR model; in the LSTM-SVR II model, the hidden state vectors of the LSTM and the seven key factors are combined as the inputs of the SVR model. The two LSTM-SVR models are compared, and the better one is applied to the trading strategy. Finally, the return of the deep-learning-based quantitative investment strategy, its resilience, and its maximum drawdown are used as indicators to judge the results. The accuracy and deviations of the LSTM-SVR prediction models are compared with those of the LSTM model and the RF model. The experimental results show that the quantitative investment strategy based on deep learning achieves higher returns than the traditional quantitative investment strategy, a more stable yield curve, and better resistance to drawdowns.
      Citation: Algorithms
      PubDate: 2019-02-12
      DOI: 10.3390/a12020035
      Issue No: Vol. 12, No. 2 (2019)
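The Delta hedging step mentioned in the entry above relies on the option's delta as the hedge ratio. The paper's exact hedging scheme is not given here; as a reference point only, the textbook Black-Scholes delta of a European call can be computed with the standard library as follows.

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot, strike, rate, vol, tau):
    """Black-Scholes delta of a European call, the hedge ratio used in
    delta hedging. `tau` is time to expiry in years. This is the textbook
    formula, not necessarily the hedging rule used by the paper."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * tau) / (vol * sqrt(tau))
    return norm_cdf(d1)
```

A delta-hedged position holds `-call_delta(...)` units of the underlying per option, rebalanced here every 15-minute segment.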
  • Algorithms, Vol. 12, Pages 36: Conjugate Gradient Hard Thresholding
           Pursuit Algorithm for Sparse Signal Recovery

    • Authors: Yanfeng Zhang, Yunbao Huang, Haiyan Li, Pu Li, Xi’an Fan
      First page: 36
      Abstract: We propose a new iterative greedy algorithm to reconstruct sparse signals in compressed sensing. The algorithm, called Conjugate Gradient Hard Thresholding Pursuit (CGHTP), is a simple combination of Hard Thresholding Pursuit (HTP) and Conjugate Gradient Iterative Hard Thresholding (CGIHT). The conjugate gradient method, with its fast asymptotic convergence rate, is integrated into the HTP scheme, which otherwise uses only a simple line search, accelerating the convergence of the iterative process. Moreover, an adaptive step-size selection strategy, which shrinks the step size until a convergence criterion is met, gives the algorithm a stable and fast convergence rate without manual step-size selection. Finally, experiments on both Gaussian signals and real-world images demonstrate the advantages of the proposed algorithm in convergence rate and reconstruction performance.
      Citation: Algorithms
      PubDate: 2019-02-13
      DOI: 10.3390/a12020036
      Issue No: Vol. 12, No. 2 (2019)
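Both HTP and CGIHT, the two ingredients named above, are built around the hard thresholding operator H_k, which keeps the k largest-magnitude entries of a vector and zeroes the rest. A minimal sketch of the operator and of plain iterative hard thresholding follows; the conjugate-gradient search direction and the adaptive step-size shrinking that distinguish CGHTP are deliberately omitted.

```python
def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries of x, zero the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k])
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht(A, y, k, steps=200, mu=0.5):
    """Plain iterative hard thresholding: x <- H_k(x + mu * A^T (y - A x)).
    A is a list of rows. CGHTP replaces the fixed-step gradient move with a
    conjugate-gradient direction and shrinks mu adaptively."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # residual
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient A^T r
        x = hard_threshold([x[j] + mu * g[j] for j in range(n)], k)
    return x
```

On a trivial identity system the iteration recovers the 1-sparse solution exactly, which is the sanity check used below.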
  • Algorithms, Vol. 12, Pages 12: Edge-Nodes Representation Neural Machine
           for Link Prediction

    • Authors: Guangluan Xu, Xiaoke Wang, Yang Wang, Daoyu Lin, Xian Sun, Kun Fu
      First page: 12
      Abstract: Link prediction is the task of predicting whether there is a link between two nodes in a network. Traditional link prediction methods that assume handcrafted features (such as common neighbors) as the link's formation mechanism are not universal. Other popular methods tend to learn a representation of the link, but they cannot represent the link fully. In this paper, we propose the Edge-Nodes Representation Neural Machine (ENRNM), a novel method that learns abundant topological features from the network as the link's representation. The ENRNM learns the link's formation mechanism by combining the representation of the edge with the representations of the nodes on its two sides as the link's full representation. To predict the link's existence, we train a fully connected neural network that can learn meaningful and abundant patterns. We show that the features of the edge and of the two nodes are equally important to link formation. Comprehensive experiments are conducted on eight networks; the results demonstrate that ENRNM not only outperforms many state-of-the-art link prediction methods but also performs well on diverse networks with different structures and characteristics.
      Citation: Algorithms
      PubDate: 2019-01-02
      DOI: 10.3390/a12010012
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 13: Dissimilarity Space Based Multi-Source
           Cross-Project Defect Prediction

    • Authors: Ren, Zhang, Munir, Xia
      First page: 13
      Abstract: Software defect prediction is an important means of guaranteeing software quality. Because a project often lacks sufficient historical data to train a classifier, cross-project defect prediction (CPDP) has been recognized as a fundamental approach. However, traditional defect prediction methods use feature attributes to represent samples, which cannot avoid negative transfer and may result in poorly performing models in CPDP. This paper proposes a multi-source cross-project defect prediction method based on dissimilarity space (DM-CPDP). The method not only retains the original information but also captures each sample's relationship to other objects, enhancing the discriminative ability of the sample attributes with respect to the class label. It first uses density-based clustering to construct the prototype set from the cluster centers of samples in the target set. Then, the arc-cosine kernel is used to calculate sample dissimilarities between the prototype set and the source domain or target set, forming the dissimilarity space. In this space, the training set is obtained with the earth mover's distance (EMD) method. The unlabeled samples converted from the target set are labeled with the k-Nearest Neighbor (KNN) algorithm. Finally, the model is learned from the training data with the TrAdaBoost method and used to predict new potential defects. The experimental results show that this approach outperforms other traditional CPDP methods.
      Citation: Algorithms
      PubDate: 2019-01-02
      DOI: 10.3390/a12010013
      Issue No: Vol. 12, No. 1 (2019)
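The KNN labeling step described in the abstract can be sketched in a few lines. This is pure Python on toy dissimilarity vectors, an illustration of the general technique rather than the authors’ implementation:

```python
from collections import Counter

def knn_label(train, labels, x, k=3):
    """Label x by majority vote among its k nearest training samples
    (Euclidean distance in the dissimilarity space; illustrative only)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical 2-D dissimilarity vectors with defect (1) / clean (0) labels.
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
```

A query near the clean cluster is voted 0 and one near the defect cluster is voted 1.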
  • Algorithms, Vol. 12, Pages 14: A Hybrid Proposed Fundus Image Enhancement
           Framework for Diabetic Retinopathy

    • Authors: Imran Qureshi, Jun Ma, Kashif Shaheed
      First page: 14
      Abstract: Diabetic retinopathy (DR) is a complication of diabetes that leads to visual impairment and is diagnosed across ethnicities in the working-age population worldwide. Fundus angiography is a widely applicable modality used by ophthalmologists and computerized applications to detect DR-based clinical features such as microaneurysms (MAs), hemorrhages (HEMs), and exudates (EXs) for early screening of DR. Fundus images are usually acquired with funduscopic cameras under varied light conditions and angles. Therefore, these images are prone to non-uniform illumination, poor contrast, transmission error, low brightness, and noise. This paper presents a novel, real-time fundus image enhancement mechanism for the early grading of diabetic retinopathy, macular degeneration, retinal neoplasms, and choroid disruptions. The proposed system is two-fold: (i) an RGB fundus image is first converted to the lightness component (denoted J) of the CIECAM02 color appearance model to obtain image information in grayscale with bright light; (ii) the J component is then processed with a nonlinear contrast enhancement approach to improve the textural and color features of the fundus image without any further extraction steps. To test and evaluate the strength of the proposed technique, several performance and quality parameters, namely peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), entropy (content information), histograms (intensity variation), and the structural similarity index measure (SSIM), were applied to 1240 fundus images drawn from two publicly available datasets, DRIVE and MESSIDOR. The experiments showed that the proposed enhancement procedure outperformed histogram-based approaches in terms of contrast, sharpness of fundus features, and brightness, indicating that it can be a suitable preprocessing tool for algorithms that segment and classify DR-related features.
      Citation: Algorithms
      PubDate: 2019-01-04
      DOI: 10.3390/a12010014
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 15: Total Optimization of Energy Networks in a
           Smart City by Multi-Population Global-Best Modified Brain Storm
           Optimization with Migration

    • Authors: Sato, Fukuyama, Iizaka, Matsui
      First page: 15
      Abstract: This paper proposes total optimization of energy networks in a smart city by multi-population global-best modified brain storm optimization (MP-GMBSO). Efficient utilization of energy is necessary for reducing CO2 emissions, and smart city demonstration projects have been conducted around the world in order to reduce total energy use and CO2 emissions. The problem can be formulated as a mixed integer nonlinear programming (MINLP) problem, and various evolutionary computation techniques, such as particle swarm optimization (PSO), differential evolution (DE), Differential Evolutionary Particle Swarm Optimization (DEEPSO), Brain Storm Optimization (BSO), Modified BSO (MBSO), Global-best BSO (GBSO), and Global-best Modified Brain Storm Optimization (GMBSO), have been applied to it. However, there is still room for improving solution quality. Multi-population evolutionary computation methods have been verified to improve solution quality, so this approach also has the potential to do so. The proposed MP-GMBSO utilizes only migration in its multi-population model, instead of abest alone (the best individual among all sub-populations so far) or both migration and abest. Various multi-population models, migration topologies, migration policies, and numbers of sub-populations are also investigated. It is verified that the proposed MP-GMBSO-based method with ring topology, the W-B policy, and 320 individuals is the most effective among all multi-population parameter settings.
      Citation: Algorithms
      PubDate: 2019-01-07
      DOI: 10.3390/a12010015
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 16: Learning an Efficient Convolution Neural
           Network for Pansharpening

    • Authors: Yecai Guo, Fei Ye, Hao Gong
      First page: 16
      Abstract: Pansharpening is a domain-specific task in satellite imagery processing which aims at fusing a multispectral image with a corresponding panchromatic one to enhance the spatial resolution of the multispectral image. Most existing traditional methods fuse multispectral and panchromatic images in a linear manner, which greatly restricts the fusion accuracy. In this paper, we propose a highly efficient inference network for pansharpening which breaks the linear limitation of traditional methods. In the network, we adopt a dilated multilevel block coupled with a skip connection to perform local and overall compensation. By using the dilated multilevel block, the proposed model can make full use of the extracted features and enlarge the receptive field without introducing extra computational burden. Experimental results reveal that our network achieves competitive or even superior pansharpening performance compared with deeper models. As our network is shallow and trained with several techniques to prevent overfitting, our model is robust to the inconsistencies across different satellites.
      Citation: Algorithms
      PubDate: 2019-01-08
      DOI: 10.3390/a12010016
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 17: Shadowed Type-2 Fuzzy Systems for Dynamic
           Parameter Adaptation in Harmony Search and Differential Evolution

    • Authors: Oscar Castillo, Patricia Melin, Fevrier Valdez, Jose Soria, Emanuel Ontiveros-Robles, Cinthia Peraza, Patricia Ochoa
      First page: 17
      Abstract: Dynamic parameter adaptation has been shown to provide a significant improvement in several metaheuristic optimization methods, and one of the main ways to realize this dynamic adaptation is the implementation of Fuzzy Inference Systems. The main reason is that Fuzzy Inference Systems can be designed from human knowledge and can thus provide intelligent dynamic adaptation of parameters in metaheuristics. In addition, with the advent of Type-2 Fuzzy Logic, its capability for uncertainty handling offers an attractive improvement for dynamic parameter adaptation in metaheuristic methods; in fact, Interval Type-2 Fuzzy Inference Systems (IT2 FIS) have been shown in recent works to provide better results than Type-1 Fuzzy Inference Systems (T1 FIS). Based on the performance improvement exhibited by IT2 FIS, the present paper implements the Shadowed Type-2 Fuzzy Inference System (ST2 FIS) for further improvements in dynamic parameter adaptation in the Harmony Search and Differential Evolution optimization methods. The ST2 FIS is an approximation of General Type-2 Fuzzy Inference Systems (GT2 FIS) and is based on the principles of Shadowed Fuzzy Sets. The main reason for using ST2 FIS rather than GT2 FIS is that the computational cost of GT2 FIS is a limiting factor in this application. The paper compares the conventional methods with static parameters against dynamic parameter adaptation based on ST2 FIS, both in solving mathematical functions and in controller optimization.
      Citation: Algorithms
      PubDate: 2019-01-09
      DOI: 10.3390/a12010017
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 18: A Novel Hybrid Ant Colony Optimization for
           a Multicast Routing Problem

    • Authors: Xiaoxia Zhang, Xin Shen, Ziqiao Yu
      First page: 18
      Abstract: Quality-of-service multicast routing is an important research topic in networks. Research has sought to obtain a multicast routing tree at the lowest cost that satisfies bandwidth, delay, and delay jitter constraints. Because this is an NP-complete problem, many meta-heuristic algorithms have been adopted to solve this kind of problem. This paper presents a new hybrid algorithm, ACO&CM, to solve it. The primary innovation is to combine the solution generation process of the ant colony optimization (ACO) algorithm with the cloud model (CM). Within the framework of ACO, we embed the cloud model to enhance the performance of the ACO algorithm by adjusting the pheromone trail on the edges. Although a high pheromone trail intensity on some edges may trap the search in a local optimum, the CM-based pheromone updating strategy is used to search for high-quality areas. To avoid the possibility of loop formation, we devise a memory detection search (MDS) strategy and integrate it into the path construction process. Finally, computational results demonstrate that the hybrid algorithm delivers efficient and excellent solution quality.
      Citation: Algorithms
      PubDate: 2019-01-10
      DOI: 10.3390/a12010018
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 19: Algorithm for Producing Rankings Based on
           Expert Surveys

    • Authors: Indra Overland, Javlon Juraev
      First page: 19
      Abstract: This paper develops an automated algorithm to process input data for segmented string relative rankings (SSRRs). The purpose of the SSRR methodology is to create rankings of countries, companies, or any other units based on surveys of expert opinion. This is done without the use of grading systems, which can distort results due to varying degrees of strictness among experts. However, the original SSRR approach relies on manual application, which is highly laborious and also carries a risk of human error. This paper seeks to solve this problem by further developing the SSRR approach with link analysis, which is based on network theory and is similar to the PageRank algorithm used by the Google search engine. The ranking data are treated as part of a linear, hierarchical network, and each unit receives a score according to how many units are positioned below it in the network. This approach makes it possible to efficiently resolve contradictions among the experts providing input for a ranking. A PHP (hypertext preprocessor) script for the algorithm is included in the article’s appendix. The proposed methodology is suitable for use across a range of social science disciplines, especially economics, sociology, and political science.
      Citation: Algorithms
      PubDate: 2019-01-10
      DOI: 10.3390/a12010019
      Issue No: Vol. 12, No. 1 (2019)
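The scoring idea in the abstract, each unit scored by how many units sit below it in the hierarchical network, can be sketched as follows. The edge data are hypothetical and this is an illustrative reconstruction, not the authors’ PHP script:

```python
# Treat expert judgements "A ranks above B" as directed edges and score each
# unit by the number of distinct units reachable below it (depth-first search).
def below_count(edges, unit):
    seen, stack = set(), [unit]
    while stack:
        node = stack.pop()
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                stack.append(b)
    return len(seen)

# Hypothetical expert input: (higher, lower) pairs.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
units = {u for e in edges for u in e}
ranking = sorted(units, key=lambda u: below_count(edges, u), reverse=True)
```

Units with more units beneath them rank higher, so contradictory pairwise judgements are resolved by the overall network structure rather than by any single expert.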
  • Algorithms, Vol. 12, Pages 20: Robust Guaranteed-Cost Preview Repetitive
           Control for Polytopic Uncertain Discrete-Time Systems

    • Authors: Yong-Hong Lan, Jun-Jun Xia, Yue-Xiang Shi
      First page: 20
      Abstract: In this paper, a robust guaranteed-cost preview repetitive controller is proposed for a class of polytopic uncertain discrete-time systems. In order to improve the tracking performance, a repetitive controller, combined with a preview compensator, is inserted in the forward channel. By using the L-order forward difference operator, an augmented dynamic system is constructed. Then, the guaranteed-cost preview repetitive control problem is transformed into a guaranteed-cost control problem for the augmented dynamic system. For a given performance index, the sufficient condition for asymptotic stability of the closed-loop system is derived by using a parameter-dependent Lyapunov function method and linear matrix inequality (LMI) techniques. Incorporating the controller obtained into the original system yields the guaranteed-cost preview repetitive controller. A numerical example is also included to show the effectiveness of the proposed method.
      Citation: Algorithms
      PubDate: 2019-01-10
      DOI: 10.3390/a12010020
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 21: Acknowledgement to Reviewers of Algorithms
           in 2018

    • Authors: Algorithms Editorial Office
      First page: 21
      Abstract: Rigorous peer-review is the corner-stone of high-quality academic publishing [...]
      Citation: Algorithms
      PubDate: 2019-01-10
      DOI: 10.3390/a12010021
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 22: Gyro Error Compensation in Optoelectronic
           Platform Based on a Hybrid ARIMA-Elman Model

    • Authors: Xingkui Xu, Chunfeng Wu, Qingyu Hou, Zhigang Fan
      First page: 22
      Abstract: As an important angle sensor of the optoelectronic platform, the gyro’s output accuracy plays a vital role in the stabilization and tracking accuracy of the whole system. It is known that the generally used fixed-bandwidth filters, single neural network models, and linear models cannot compensate for gyro error well, and so they cannot meet engineering needs satisfactorily. In this paper, a novel hybrid ARIMA-Elman model is proposed. Because it fully combines the strong linear approximation capability of the ARIMA model with the superior nonlinear compensation capability of a neural network, the proposed model is suitable for handling gyro error, especially its non-stationary random component. Then, to solve the problem that the parameters of the ARIMA model and the initial weights of the Elman neural network are difficult to determine, a differential algorithm is utilized for parameter selection. Compared with other commonly used optimization algorithms (e.g., the traditional least-squares identification method and the genetic algorithm), the differential algorithm can overcome the shortcoming of premature convergence and has higher optimization speed and accuracy. In addition, the drift error is obtained with the technique of lifting-wavelet separation and reconstruction, and, in order to weaken the randomness of the data sequence, an ashing operation and a Jarque-Bera test have been added to the handling process. In this study, actual gyro data are collected, and the experimental results show that the proposed method has higher compensation accuracy and faster network convergence than other commonly used error-compensation methods. Finally, the hybrid method is used to compensate for gyro error collected in other states. The test results illustrate that the proposed algorithm effectively improves error-compensation accuracy and has good generalization performance.
      Citation: Algorithms
      PubDate: 2019-01-11
      DOI: 10.3390/a12010022
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 23: On Finding and Enumerating Maximal and
           Maximum k-Partite Cliques in k-Partite Graphs

    • Authors: Charles A. Phillips, Kai Wang, Erich J. Baker, Jason A. Bubier, Elissa J. Chesler, Michael A. Langston
      First page: 23
      Abstract: Let k denote an integer greater than 2, let G denote a k-partite graph, and let S denote the set of all maximal k-partite cliques in G. Several open questions concerning the computation of S are resolved. A straightforward and highly scalable modification to the classic recursive backtracking approach of Bron and Kerbosch is first described and shown to run in O(3^(n/3)) time. A series of novel graph constructions is then used to prove that this bound is best possible, in the sense that it matches an asymptotically tight upper limit on |S|. The task of identifying a vertex-maximum element of S is also considered and, in contrast with the k = 2 case, shown to be NP-hard for every k ≥ 3. A special class of k-partite graphs that arises in the context of functional genomics and other problem domains is studied as well and shown to be more readily solvable via a polynomial-time transformation to bipartite graphs. Applications, limitations, potentials for faster methods, heuristic approaches, and alternate formulations are also addressed.
      Citation: Algorithms
      PubDate: 2019-01-15
      DOI: 10.3390/a12010023
      Issue No: Vol. 12, No. 1 (2019)
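The classic Bron–Kerbosch recursion that the paper modifies can be sketched as follows. This is the standard unpivoted form on a toy graph; the paper’s k-partite modification is not reproduced here:

```python
# Bron-Kerbosch maximal-clique enumeration: R is the growing clique,
# P the candidate vertices, X the already-processed vertices.
def bron_kerbosch(R, P, X, adj, out):
    if not P and not X:
        out.append(frozenset(R))  # R is maximal: nothing can extend it
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

# Toy graph: a triangle {0, 1, 2} plus a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
```

On this graph the recursion emits exactly the two maximal cliques {0, 1, 2} and {2, 3}; the O(3^(n/3)) bound cited in the abstract is the worst case over all n-vertex graphs.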
  • Algorithms, Vol. 12, Pages 24: A Pricing Strategy of E-Commerce
           Advertising Cooperation in the Stackelberg Game Model with Different
           Market Power Structure

    • Authors: Ling Zhu, Jie Lin
      First page: 24
      Abstract: A lot of research has studied the auction mechanism of uncertain advertising cooperation between e-commerce platforms and advertisers, but little has focused on pricing strategy in stable advertising cooperation under a certain market power structure. To fill this gap, this paper studies the deep interest distribution of the two parties in such cooperation. We propose a pricing strategy by building two Stackelberg leader-follower models in which the e-commerce platform and the advertiser, respectively, lead the cooperation. We analyze how the optimal profits of both parties and of the total system are affected by the main decision factors, including the income commission proportion, the advertising product price, and the cost of each party’s brand-advertising effort, under the different dominance models. Numerical studies are then used to verify the effectiveness of the models. Finally, we draw conclusions and make suggestions to platforms and advertisers in e-commerce advertising cooperation.
      Citation: Algorithms
      PubDate: 2019-01-18
      DOI: 10.3390/a12010024
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 25: Power Allocation Algorithm for an
           Energy-Harvesting Wireless Transmission System Considering Energy Losses

    • Authors: Su Zhao, Gang Huang, Qi Zhu
      First page: 25
      Abstract: For an energy-harvesting wireless transmission system in which the transmitter harvests energy from nature and incurs two kinds of extra energy consumption, circuit consumption and storage losses, optimization models are set up in this paper to maximize the average throughput of the system within a certain period of time, for both a time-invariant channel and a time-varying channel. Convex optimization methods, such as the Lagrange multiplier method and the KKT (Karush-Kuhn-Tucker) conditions, are used to solve the optimization problem; an optimal offline power allocation algorithm with a three-threshold structure is then proposed. In the three-threshold algorithm, two thresholds are found by linear search, while the third is calculated from the channel state information and the energy losses; the offline power allocation then follows from the three thresholds and the energy arrivals. Furthermore, inspired by the optimal offline algorithm, a low-complexity online algorithm with adaptive thresholds is derived. Finally, simulation results show that the offline power allocation algorithms proposed in this paper outperform other algorithms, that the performance of the proposed online algorithm is close to that of the offline one, and that these algorithms help improve the average throughput of the system.
      Citation: Algorithms
      PubDate: 2019-01-18
      DOI: 10.3390/a12010025
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 26: Ensemble and Deep Learning for
           Language-Independent Automatic Selection of Parallel Data

    • Authors: Despoina Mouratidis, Katia Lida Kermanidis
      First page: 26
      Abstract: Machine translation is used in many applications in everyday life. Due to the increase of translated documents that need to be organized as useful or not (for building a translation model), the automated categorization of texts (classification) is a popular research field of machine learning. This kind of information can be quite helpful for machine translation. Our parallel corpora (English-Greek and English-Italian) are based on educational data, which are quite difficult to translate. We apply two state-of-the-art architectures, Random Forest (RF) and Deeplearning4j (DL4J), to our data (which comprise three translation outputs). To our knowledge, this is the first time that deep learning architectures have been applied to the automatic selection of parallel data. We also propose new string-based features that prove effective for the classifier, and we investigate whether an attribute selection method can improve classification accuracy. Experimental results indicate an increase of up to 4% (compared to our previous work) using RF, and rather satisfactory results using DL4J.
      Citation: Algorithms
      PubDate: 2019-01-18
      DOI: 10.3390/a12010026
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 27: Data Analysis, Simulation and Visualization
           for Environmentally Safe Maritime Data

    • Authors: Manolis Maragoudakis
      First page: 27
      Abstract: Marine transportation in the Aegean Sea, a part of the Mediterranean Sea that serves as a gateway between three continents, has recently seen a significant increase. Despite the commercial benefits to the region, there are issues related to the preservation of the local ecosystem and to safety. The danger is further aggravated by the absence of regulations on allowed waterways. Marine accidents could cause a major ecological disaster in the area and have large socio-economic impacts in Greece. Monitoring marine traffic data is therefore of major importance and one of the primary goals of the current research. Real-time monitoring and alerting can be extremely useful to local authorities, companies, NGOs, and the public in general. Apart from real-time applications, knowledge discovery from historical data is also significant. Towards this direction, a data analysis and simulation framework for maritime data has been designed and developed. The framework analyzes historical data about ships and area conditions, of varying temporal and spatial granularity, measures critical parameters that could influence the level of hazard in certain regions, and clusters such data according to their similarity. Upon this unsupervised step, the degree of hazard is estimated and, along with other important parameters, fed into a special type of Bayesian network in order to infer future situations, thus simulating future data based on past conditions. Another innovative aspect of this work is the modeling of shipping traffic as a social network, whose analysis can provide useful and informative visualizations. Such a system is particularly beneficial to multiple stakeholders, such as the port authorities and the Ministry of Mercantile Marine, mainly because specific policy options can be evaluated and re-designed based on feedback from our framework.
      Citation: Algorithms
      PubDate: 2019-01-21
      DOI: 10.3390/a12010027
      Issue No: Vol. 12, No. 1 (2019)
  • Algorithms, Vol. 12, Pages 1: The Effects of Geopolitical Uncertainty in
           Forecasting Financial Markets: A Machine Learning Approach

    • Authors: Vasilios Plakandaras, Periklis Gogas, Theophilos Papadimitriou
      First page: 1
      Abstract: An important ingredient in economic policy planning, in both the public and the private sector, is risk management. In economics and finance, risk manifests in many forms, and it depends on the sector it concerns (financial, fiscal, international, etc.). An under-investigated form is the risk stemming from geopolitical events, such as wars, political tensions, and conflicts; in contrast, the effects of terrorist acts have been thoroughly examined in the relevant literature. In this paper, we examine the ability of the geopolitical risk of 14 emerging countries to forecast several assets: oil prices, exchange rates, national stock indices, and the price of gold. In doing so, we build forecasting models based on machine learning techniques and evaluate the associated out-of-sample forecasting error at horizons from one to twenty-four months ahead. Our empirical findings suggest that geopolitical events in emerging countries are of little importance to the global economy, since their effect on the assets examined is mainly transitory and only of regional importance. In contrast, gold prices do seem to be affected by fluctuations in geopolitical risk, a finding that may be explained by the nature of investments in gold, which economic agents typically use to hedge risk.
      Citation: Algorithms
      PubDate: 2018-12-20
      DOI: 10.3390/a12010001
      Issue No: Vol. 12, No. 1 (2018)
  • Algorithms, Vol. 12, Pages 2: Steady-State Performance of an Adaptive
           Combined MISO Filter Using the Multichannel Affine Projection Algorithm

    • Authors: Danilo Comminiello, Michele Scarpiniti, Luis A. Azpicueta-Ruiz, Aurelio Uncini
      First page: 2
      Abstract: The combination of adaptive filters is an effective approach to improve filtering performance. In this paper, we investigate the performance of an adaptive combined scheme between two adaptive multiple-input single-output (MISO) filters, which can be easily extended to the case of multiple outputs. In order to generalize the analysis, we consider the multichannel affine projection algorithm (APA) to update the coefficients of the MISO filters, which increases the possibility of exploiting the capabilities of the filtering scheme. Using energy conservation relations, we derive a theoretical behavior of the proposed adaptive combination scheme at steady state. Such analysis entails some further theoretical insights with respect to the single channel combination scheme. Simulation results prove both the validity of the theoretical steady-state analysis and the effectiveness of the proposed combined scheme.
      Citation: Algorithms
      PubDate: 2018-12-20
      DOI: 10.3390/a12010002
      Issue No: Vol. 12, No. 1 (2018)
  • Algorithms, Vol. 12, Pages 3: Adaptive Operator Quantum-Behaved
           Pigeon-Inspired Optimization Algorithm with Application to UAV Path
           Planning

    • Authors: Chunhe Hu, Yu Xia, Junguo Zhang
      First page: 3
      Abstract: Path planning of unmanned aerial vehicles (UAVs) in threatening and adversarial areas is a constrained nonlinear optimization problem which must take a great number of static and dynamic constraints into account. Quantum-behaved pigeon-inspired optimization (QPIO) has been widely applied to such nonlinear problems. However, conventional QPIO suffers from low global convergence speed and local optima. To solve these problems, an improved QPIO algorithm, adaptive-operator QPIO, is proposed in this paper. First, a new initialization process based on the logistic mapping method is introduced to generate the initial pigeon-swarm population. Then, to improve the performance of the map-and-compass operation, the factor parameter is adaptively updated in each iteration, which balances the ability between global and local search. In the final landmark operation, a gradually decreasing pigeon-population updating strategy is introduced to prevent premature convergence and local optima. Finally, the proposed algorithm is demonstrated on the UAV path planning problem, and the comparison results indicate that it performs better than particle swarm optimization (PSO), pigeon-inspired optimization (PIO), and their variants in terms of convergence and accuracy.
      Citation: Algorithms
      PubDate: 2018-12-21
      DOI: 10.3390/a12010003
      Issue No: Vol. 12, No. 1 (2018)
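The logistic-map initialization mentioned in the abstract can be sketched as follows; parameter choices such as r = 4.0 and the seed value x0 are illustrative assumptions:

```python
# Chaotic population initialization via the logistic map
# x_{n+1} = r * x_n * (1 - x_n), which spreads the initial swarm over
# the search space more evenly than many pseudo-random draws.
def logistic_init(pop_size, dim, lo, hi, r=4.0, x0=0.567):
    population, x = [], x0
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = r * x * (1.0 - x)                  # logistic map, stays in [0, 1]
            individual.append(lo + (hi - lo) * x)  # scale to [lo, hi]
        population.append(individual)
    return population

swarm = logistic_init(pop_size=5, dim=3, lo=-10.0, hi=10.0)
```

With r = 4.0 the map is fully chaotic, so successive values are deterministic yet well spread, which is the property such initialization schemes exploit.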
  • Algorithms, Vol. 12, Pages 4: On Fast Converging Data-Selective Adaptive
           Filtering

    • Authors: Marcele O. K. Mendonça, Jonathas O. Ferreira, Christos G. Tsinos, Paulo S. R. Diniz, Tadeu N. Ferreira
      First page: 4
      Abstract: The amount of information currently generated in the world has been increasing exponentially, raising the question of whether all acquired data are relevant to the learning process. If a subset of the data does not bring enough innovation, data-selection strategies can be employed to reduce the computational cost and, in many cases, improve the estimation accuracy. In this paper, we explore adaptive filtering algorithms whose characteristic features are fast convergence and data selection. These algorithms incorporate a prescribed data-selection strategy and are compared in distinct application environments. The simulation results include both synthetic and real data.
      Citation: Algorithms
      PubDate: 2018-12-21
      DOI: 10.3390/a12010004
      Issue No: Vol. 12, No. 1 (2018)
  • Algorithms, Vol. 12, Pages 5: MAPSkew: Metaheuristic Approaches for
           Partitioning Skew in MapReduce

    • Authors: Matheus H. M. Pericini, Lucas G. M. Leite, Francisco H. de Carvalho-Junior, Javam C. Machado, Cenez A. Rezende
      First page: 5
      Abstract: MapReduce is a parallel computing model in which a large dataset is split into smaller parts and executed on multiple machines. Due to its simplicity, MapReduce has been widely used in various application domains. It can significantly reduce the processing time of a large amount of data by dividing the dataset into smaller parts and processing them in parallel on multiple machines. However, when data are not uniformly distributed, we have so-called partitioning skew: the allocation of tasks to machines becomes unbalanced, either because the distribution function splits the dataset unevenly or because part of the data is more complex and requires greater computational effort. To solve this problem, we propose an approach based on metaheuristics. For evaluation purposes, three metaheuristics were implemented: Simulated Annealing, Local Beam Search, and Stochastic Beam Search. Our experimental evaluation, using a MapReduce implementation of the Bron-Kerbosch clique algorithm, shows that the proposed method can find good partitionings while better balancing data among machines.
      Citation: Algorithms
      PubDate: 2018-12-24
      DOI: 10.3390/a12010005
      Issue No: Vol. 12, No. 1 (2018)
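Of the three metaheuristics evaluated, Simulated Annealing is easy to sketch on a toy version of the balancing objective. The move, cooling schedule, and cost function below are illustrative assumptions, not the paper’s exact setup:

```python
import math
import random

def max_load(assign, weights, machines):
    """Cost: the load of the most heavily loaded machine."""
    loads = [0.0] * machines
    for item, m in enumerate(assign):
        loads[m] += weights[item]
    return max(loads)

def anneal(weights, machines, steps=2000, temp=1.0, cool=0.995, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(machines) for _ in weights]
    cost = max_load(assign, weights, machines)
    for _ in range(steps):
        i, m = rng.randrange(len(weights)), rng.randrange(machines)
        old = assign[i]
        assign[i] = m                     # move one task to a random machine
        new_cost = max_load(assign, weights, machines)
        # Accept improvements always; worse moves with Boltzmann probability.
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / temp):
            cost = new_cost
        else:
            assign[i] = old               # revert the rejected move
        temp *= cool
    return assign, cost

# Hypothetical task weights balanced across two machines (optimum is 8.0).
weights = [5.0, 3.0, 3.0, 2.0, 2.0, 1.0]
assign, cost = anneal(weights, machines=2)
```

Early on the high temperature lets the search escape local optima; as the temperature decays, only improving moves survive, which is the core trade-off of Simulated Annealing.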
  • Algorithms, Vol. 12, Pages 6: Extraction and Detection of Surface Defects
           in Particleboards by Tracking Moving Targets

    • Authors: Chengcheng Wang, Yaqiu Liu, Peiyu Wang
      First page: 6
      Abstract: Because particleboards move linearly along the production line, detecting surface defects in particleboards is a major challenge. In this paper, a method based on moving target tracking is proposed for the detection of surface defects in particleboards. To achieve this, the kernel correlation filter (KCF) target tracking algorithm was modified with the median flow algorithm and used to capture the moving targets of surface defects. The defect images were extracted by a Sobel operator, and the defect number, the defect area, and the degree of damage were calculated. The level of surface defects in particleboards was evaluated by fuzzy pattern recognition. Experiments were then carried out to prove the effectiveness and accuracy of the proposed method.
      Citation: Algorithms
      PubDate: 2018-12-24
      DOI: 10.3390/a12010006
      Issue No: Vol. 12, No. 1 (2018)
  • Algorithms, Vol. 12, Pages 7: Salt and Pepper Noise Removal with
           Multi-Class Dictionary Learning and L0 Norm Regularizations

    • Authors: Di Guo, Zhangren Tu, Jiechao Wang, Min Xiao, Xiaofeng Du, Xiaobo Qu
      First page: 7
      Abstract: Images may be corrupted by salt and pepper impulse noise during image acquisition or transmission. Although promising denoising performance has recently been obtained with sparse representations, how to restore high-quality images remains challenging and open. In this work, image sparsity is enhanced with fast multi-class dictionary learning, and then both the sparsity regularization and robust data fidelity are formulated as minimizations of L0-L0 norms for salt and pepper impulse noise removal. Additionally, a numerical algorithm of modified alternating direction minimization is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art methods in preserving image details and achieves higher objective evaluation criteria.
      Citation: Algorithms
      PubDate: 2018-12-25
      DOI: 10.3390/a12010007
      Issue No: Vol. 12, No. 1 (2018)
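For orientation, the snippet below only illustrates the noise model the paper targets and a classical median-filter baseline; it does not reproduce the proposed multi-class dictionary learning or the L0-L0 minimization (names and parameters are illustrative):

```python
import numpy as np

def add_salt_pepper(img, p, rng):
    """Corrupt roughly a fraction p of pixels with 0 (pepper) or 255 (salt)."""
    out = img.copy()
    noise = rng.random(img.shape)
    out[noise < p / 2] = 0
    out[(noise >= p / 2) & (noise < p)] = 255
    return out

def median_filter3(img):
    """3x3 median filter (interior pixels only), a classical baseline
    against which sparse-representation denoisers are usually compared."""
    out = img.copy().astype(float)
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out
```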
  • Algorithms, Vol. 12, Pages 8: A Robust Visual Tracking Algorithm Based on
           Spatial-Temporal Context Hierarchical Response Fusion

    • Authors: Wancheng Zhang, Yanmin Luo, Zhi Chen, Yongzhao Du, Daxin Zhu, Peizhong Liu
      First page: 8
      Abstract: Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual object tracking. However, visual tracking is still challenging when the target objects undergo complex scenarios such as occlusion, deformation, scale changes and illumination changes. In this paper, we utilize the hierarchical features of convolutional neural networks (CNNs) and learn a spatial-temporal context correlation filter on convolutional layers. Then, the translation is estimated by fusing the response scores of the filters on the three convolutional layers. In terms of scale estimation, we learn a discriminative correlation filter to estimate scale from the best confidence results. Furthermore, we propose a re-detection activation discrimination method to improve the robustness of visual tracking in the case of tracking failure and an adaptive model update method to reduce tracking drift caused by noisy updates. We evaluate the proposed tracker with DCFs and deep features on OTB benchmark datasets. The tracking results demonstrate that the proposed algorithm is superior to several state-of-the-art DCF methods in terms of accuracy and robustness.
      Citation: Algorithms
      PubDate: 2018-12-26
      DOI: 10.3390/a12010008
      Issue No: Vol. 12, No. 1 (2018)
  • Algorithms, Vol. 12, Pages 9: Comparative Study in Fuzzy Controller
           Optimization Using Bee Colony, Differential Evolution, and Harmony Search

    • Authors: Oscar Castillo, Fevrier Valdez, José Soria, Leticia Amador-Angulo, Patricia Ochoa, Cinthia Peraza
      First page: 9
      Abstract: This paper presents a comparison among the bee colony optimization (BCO), differential evolution (DE), and harmony search (HS) algorithms. In addition, for each algorithm, a type-1 fuzzy logic system (T1FLS) for the dynamic modification of the main parameters is presented. The dynamic adjustment of the main parameters of each algorithm through fuzzy systems aims at enhancing the performance of the corresponding algorithms. Each algorithm (modified and original versions) is analyzed and compared based on the optimal design of fuzzy systems for benchmark control problems, especially in fuzzy controller design. Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers. Statistical analysis demonstrates that lower errors are obtained when fuzzy systems are implemented to enhance each proposed algorithm.
      Citation: Algorithms
      PubDate: 2018-12-27
      DOI: 10.3390/a12010009
      Issue No: Vol. 12, No. 1 (2018)
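Of the three compared metaheuristics, differential evolution is the simplest to sketch. Below is a minimal DE/rand/1/bin loop with fixed F and CR; the paper's contribution is precisely to adapt such parameters with a type-1 fuzzy system, which is not reproduced here:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer with fixed mutation factor F and
    crossover rate CR (the parameters a fuzzy system would adapt)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct partners and build the mutant vector
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```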
  • Algorithms, Vol. 12, Pages 10: Diagonally Implicit Runge–Kutta Type
           Method for Directly Solving Special Fourth-Order Ordinary Differential
           Equations with Ill-Posed Problem of a Beam on Elastic Foundation

    • Authors: Nizam Ghawadri, Norazak Senu, Firas Adel Fawzi, Fudziah Ismail, Zarina Bibi Ibrahim
      First page: 10
      Abstract: In this study, fifth-order and sixth-order diagonally implicit Runge–Kutta type (DIRKT) techniques for solving fourth-order ordinary differential equations (ODEs) are derived, denoted as DIRKT5 and DIRKT6, respectively. The first method has three identical nonzero diagonal elements, and the second has four. A set of test problems was applied to validate the methods, and numerical results showed that the proposed methods are more efficient in terms of accuracy and number of function evaluations than existing implicit Runge–Kutta (RK) methods.
      Citation: Algorithms
      PubDate: 2018-12-29
      DOI: 10.3390/a12010010
      Issue No: Vol. 12, No. 1 (2018)
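The paper's DIRKT schemes integrate the fourth-order form directly; as a point of reference, a special fourth-order ODE y'''' = f(x, y) can always be rewritten as a first-order system and solved with the classical explicit RK4 step, as in this sketch (a textbook baseline, not the paper's method):

```python
import numpy as np

def rk4_fourth_order(f, x0, y_init, h, steps):
    """Solve y'''' = f(x, y) by rewriting it as the first-order system
    u = (y, y', y'', y''') and applying the classical explicit RK4 step."""
    def g(x, u):
        return np.array([u[1], u[2], u[3], f(x, u[0])])
    u = np.asarray(y_init, dtype=float)
    x = x0
    for _ in range(steps):
        k1 = g(x, u)
        k2 = g(x + h / 2, u + h / 2 * k1)
        k3 = g(x + h / 2, u + h / 2 * k2)
        k4 = g(x + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return u  # (y, y', y'', y''') at the final point
```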
  • Algorithms, Vol. 12, Pages 11: Facial Expression Recognition Based on
           Discrete Separable Shearlet Transform and Feature Selection

    • Authors: Yang Lu, Shigang Wang, Wenting Zhao
      First page: 11
      Abstract: In this paper, a novel approach to facial expression recognition based on the discrete separable shearlet transform (DSST) and normalized mutual information feature selection is proposed. The approach can be divided into five steps. First, all test and training images are preprocessed. Second, DSST is applied to the preprocessed facial expression images, and all the transformation coefficients are obtained as the original feature set. Third, an improved normalized mutual information feature selection is proposed to find the optimal feature subset of the original feature set, so that the key classification information of the original data is retained. Fourth, the dimensionality of the selected feature space is further reduced by employing linear discriminant analysis. Finally, a support vector machine is used to recognize the expressions. In this study, experimental verification was carried out on four open facial expression databases. The results show that this method can not only improve the recognition rate of facial expressions, but also significantly reduce the computational complexity and improve system efficiency.
      Citation: Algorithms
      PubDate: 2018-12-31
      DOI: 10.3390/a12010011
      Issue No: Vol. 12, No. 1 (2018)
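The normalized mutual information criterion used for feature selection can be sketched directly from its definition, NMI(X;Y) = I(X;Y)/sqrt(H(X)H(Y)); the plain ranking-based selection below is a simplification of the paper's improved procedure (function names are illustrative):

```python
import numpy as np
from itertools import product

def normalized_mi(x, y):
    """NMI between two discrete sequences: I(X;Y) / sqrt(H(X) * H(Y))."""
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = np.unique(x), np.unique(y)
    mi = hx = hy = 0.0
    for a in xs:
        px = np.mean(x == a)
        hx -= px * np.log(px)
    for b in ys:
        py = np.mean(y == b)
        hy -= py * np.log(py)
    for a, b in product(xs, ys):
        pxy = np.mean((x == a) & (y == b))
        if pxy > 0:
            mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0

def select_features(features, labels, m):
    """Rank discrete feature columns by NMI with the labels, keep the top m."""
    scores = [normalized_mi(col, labels) for col in features.T]
    return list(np.argsort(scores)[::-1][:m])
```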
  • Algorithms, Vol. 11, Pages 190: Best Trade-Off Point Method for Efficient
           Resource Provisioning in Spark

    • Authors: Peter P. Nghiem
      First page: 190
      Abstract: Considering the recent exponential growth in the amount of information processed in Big Data, the high energy consumed by data processing engines in datacenters has become a major issue, underlining the need for efficient resource allocation for more energy-efficient computing. We previously proposed the Best Trade-off Point (BToP) method, which provides a general approach and techniques based on an algorithm with mathematical formulas to find the best trade-off point on an elbow curve of performance vs. resources for efficient resource provisioning in Hadoop MapReduce. The BToP method is expected to work for any application or system which relies on a trade-off elbow curve, non-inverted or inverted, for making good decisions. In this paper, we apply the BToP method to the emerging cluster computing framework, Apache Spark, and show that its performance and energy consumption are better than Spark with its built-in dynamic resource allocation enabled. Our Spark-Bench tests confirm the effectiveness of using the BToP method with Spark to determine the optimal number of executors for any workload in production environments where job profiling for behavioral replication will lead to the most efficient resource provisioning.
      Citation: Algorithms
      PubDate: 2018-11-22
      DOI: 10.3390/a11120190
      Issue No: Vol. 11, No. 12 (2018)
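The paper defines its own formulas for locating the best trade-off point; a common, simpler elbow heuristic with the same intent is to pick the point farthest from the chord joining the curve's endpoints, sketched here for orientation only:

```python
import numpy as np

def elbow_point(x, y):
    """Index of the point farthest (perpendicular distance) from the chord
    joining the curve's endpoints: a generic elbow heuristic, not the
    paper's BToP formulas."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    p0 = np.array([x[0], y[0]])
    p1 = np.array([x[-1], y[-1]])
    d = p1 - p0
    d = d / np.linalg.norm(d)
    pts = np.stack([x, y], axis=1) - p0
    proj = np.outer(pts @ d, d)          # projection onto the chord
    dist = np.linalg.norm(pts - proj, axis=1)
    return int(np.argmax(dist))
```

On a runtime-vs-executors curve this returns the index where adding more resources stops paying off.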
  • Algorithms, Vol. 11, Pages 191: MapReduce Algorithm for Location
           Recommendation by Using Area Skyline Query

    • Authors: Chen Li, Annisa Annisa, Asif Zaman, Mahboob Qaosar, Saleh Ahmed, Yasuhiko Morimoto
      First page: 191
      Abstract: Location recommendation is essential for various map-based mobile applications. However, it is not easy to generate location-based recommendations with the changing contexts and locations of mobile users. Skyline operation is one of the most well-established techniques for location-based services. Our previous work proposed a new query method, called “area skyline query”, to select areas in a map. However, it is not efficient for large-scale data. In this paper, we propose a parallel algorithm for processing the area skyline using MapReduce. Intensive experiments on both synthetic and real data confirm that our proposed algorithm is sufficiently efficient for large-scale data.
      Citation: Algorithms
      PubDate: 2018-11-25
      DOI: 10.3390/a11120191
      Issue No: Vol. 11, No. 12 (2018)
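The dominance test at the core of any skyline computation (area-based or not) is easy to state; the sketch below is a plain single-machine block-nested-loops skyline for attributes to be minimized, not the paper's MapReduce area-skyline algorithm:

```python
def skyline(points):
    """Keep the points not dominated by any other point. Here p dominates q
    when p is <= q in every dimension and < in at least one (all
    attributes are to be minimized)."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and \
               any(a < b for a, b in zip(p, q))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```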
  • Algorithms, Vol. 11, Pages 192: A Study on Faster R-CNN-Based Subway
           Pedestrian Detection with ACE Enhancement

    • Authors: Hongquan Qu, Meihan Wang, Changnian Zhang, Yun Wei
      First page: 192
      Abstract: At present, the problem of pedestrian detection has attracted increasing attention in the field of computer vision. The faster regions with convolutional neural network features (Faster R-CNN) approach is regarded as one of the most important techniques for studying this problem. However, the detection capability of the model trained by Faster R-CNN is susceptible to the diversity of pedestrians' appearance and the light intensity in specific scenarios, such as in a subway, which can lead to a decline in the recognition rate and an offset of the target selection for pedestrians. In this paper, we propose a modified Faster R-CNN method with automatic color enhancement (ACE), which can improve sample contrast by calculating the relative light and dark relationship to correct the final pixel value. In addition, a calibration method based on sample category reduction is presented to accurately locate the target for detection. Then, we apply the Faster R-CNN target detection framework to the experimental dataset. Finally, the effectiveness of this method is verified with actual data samples collected from subway passenger monitoring video.
      Citation: Algorithms
      PubDate: 2018-11-26
      DOI: 10.3390/a11120192
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 193: A Forecast Model of the Number of
           Containers for Containership Voyage

    • Authors: Yuchuang Wang, Guoyou Shi, Xiaotong Sun
      First page: 193
      Abstract: Container ships must pass through multiple ports of call during a voyage. Therefore, forecasting container volume information at the port of origin and sending such information to subsequent ports is crucial for container terminal management and container stowage personnel. Numerous factors influence container allocation to container ships for a voyage, and the degree of influence varies, engendering a complex nonlinearity. Therefore, this paper proposes a model based on gray relational analysis (GRA) and a mixed kernel support vector machine (SVM) for predicting container allocation to a container ship for a voyage. First, in this model, the weights of influencing factors are determined through GRA. Then, the weighted factors serve as the input of the SVM model, and the SVM model parameters are optimized through a genetic algorithm. Numerical simulations revealed that the proposed model could effectively predict the number of containers for a container ship voyage and that it exhibited strong generalization ability and high accuracy. Accordingly, this model provides a new method for predicting container volume for a voyage.
      Citation: Algorithms
      PubDate: 2018-11-28
      DOI: 10.3390/a11120193
      Issue No: Vol. 11, No. 12 (2018)
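The GRA weighting step can be sketched from the standard grey relational coefficient formula (with distinguishing coefficient ρ, typically 0.5); the sketch below assumes min-max-normalizable, non-constant series and is not the paper's exact preprocessing:

```python
import numpy as np

def gra_weights(reference, factors, rho=0.5):
    """Grey relational grades of factor series against a reference series,
    normalized to weights. Each series is min-max scaled first and is
    assumed non-constant; rho is the distinguishing coefficient."""
    def norm(v):
        return (v - v.min()) / (v.max() - v.min())
    ref_n = norm(np.asarray(reference, float))
    X = np.asarray(factors, float)            # shape: (n_factors, n_samples)
    delta = np.abs(np.array([norm(row) for row in X]) - ref_n)
    dmin, dmax = delta.min(), delta.max()
    grc = (dmin + rho * dmax) / (delta + rho * dmax)  # relational coefficients
    grades = grc.mean(axis=1)                          # relational grades
    return grades / grades.sum()                       # normalized weights
```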
  • Algorithms, Vol. 11, Pages 194: New and Efficient Algorithms for Producing
           Frequent Itemsets with the Map-Reduce Framework

    • Authors: Yaron Gonen, Ehud Gudes, Kirill Kandalov
      First page: 194
      Abstract: The Map-Reduce (MR) framework has become a popular framework for developing new parallel algorithms for Big Data. Efficient algorithms for data mining of big data and distributed databases have become an important problem. In this paper we focus on algorithms producing association rules and frequent itemsets. After reviewing the most recent algorithms that perform this task within the MR framework, we present two new algorithms: one for producing closed frequent itemsets, and the second for producing frequent itemsets when the database is updated and new data are added to the old database. Both algorithms include novel optimizations which are suited to the MR framework, as well as to other parallel architectures. A detailed experimental evaluation shows the effectiveness and advantages of the algorithms over existing methods when it comes to large distributed databases.
      Citation: Algorithms
      PubDate: 2018-11-28
      DOI: 10.3390/a11120194
      Issue No: Vol. 11, No. 12 (2018)
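For reference, the support-counting logic that such Map-Reduce algorithms distribute can be shown as a plain single-machine, level-wise (Apriori-style) miner; the parallelization and the paper's optimizations are omitted:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Level-wise frequent itemset mining: count 1-itemsets, then grow
    candidate k-itemsets from the items still frequent at level k-1.
    Returns {itemset: support}. Support is a fraction of transactions."""
    n = len(transactions)
    tsets = [frozenset(t) for t in transactions]
    counts = Counter(item for t in tsets for item in t)
    freq = {frozenset([i]): c / n for i, c in counts.items()
            if c / n >= min_support}
    result = dict(freq)
    k = 2
    while freq:
        items = sorted({i for s in freq for i in s})
        freq = {}
        for cand in (frozenset(c) for c in combinations(items, k)):
            sup = sum(1 for t in tsets if cand <= t) / n
            if sup >= min_support:
                freq[cand] = sup
        result.update(freq)
        k += 1
    return result
```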
  • Algorithms, Vol. 11, Pages 195: Convex-Hull Algorithms: Implementation,
           Testing, and Experimentation

    • Authors: Ask Neve Gamby, Jyrki Katajainen
      First page: 195
      Abstract: From a broad perspective, we study issues related to implementation, testing, and experimentation in the context of geometric algorithms. Our focus is on the effect of quality of implementation on experimental results. More concisely, we study algorithms that compute convex hulls for a multiset of points in the plane. We introduce several improvements to the implementations of the studied algorithms: plane-sweep, torch, quickhull, and throw-away. With a new set of space-efficient implementations, the experimental results—in the integer-arithmetic setting—are different from those of earlier studies. From this, we conclude that utmost care is needed when doing experiments and when trying to draw solid conclusions from them.
      Citation: Algorithms
      PubDate: 2018-11-28
      DOI: 10.3390/a11120195
      Issue No: Vol. 11, No. 12 (2018)
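Among the algorithm families studied, a compact, well-known representative is Andrew's monotone-chain construction, a close relative of plane-sweep; a minimal integer-friendly sketch (not one of the paper's tuned implementations):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, O(n log n). Returns the hull
    in counter-clockwise order, collinear boundary points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints shared between the chains
```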
  • Algorithms, Vol. 11, Pages 196: Solon: A Holistic Approach for Modelling,
           Managing and Mining Legal Sources

    • Authors: Marios Koniaris, George Papastefanatos, Ioannis Anagnostopoulos
      First page: 196
      Abstract: Recently there has been an exponential growth of the number of publicly available legal resources. Portals allowing users to search legal documents, through keyword queries, are now widespread. However, legal documents are mainly stored and offered in different sources and formats that do not facilitate semantic machine-readable techniques, thus making difficult for legal stakeholders to acquire, modify or interlink legal knowledge. In this paper, we describe Solon, a legal document management platform. It offers advanced modelling, managing and mining functions over legal sources, so as to facilitate access to legal knowledge. It utilizes a novel method for extracting semantic representations of legal sources from unstructured formats, such as PDF and HTML text files, interlinking and enhancing them with classification features. At the same time, utilizing the structure and specific features of legal sources, it provides refined search results. Finally, it allows users to connect and explore legal resources according to their individual needs. To demonstrate the applicability and usefulness of our approach, Solon has been successfully deployed in a public sector production environment, making Greek tax legislation easily accessible to the public. Opening up legislation in this way will help increase transparency and make governments more accountable to citizens.
      Citation: Algorithms
      PubDate: 2018-12-03
      DOI: 10.3390/a11120196
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 197: Parametric Estimation in the Vasicek-Type
           Model Driven by Sub-Fractional Brownian Motion

    • Authors: Shengfeng Li, Yi Dong
      First page: 197
      Abstract: In this paper, we tackle the least squares estimators of the Vasicek-type model driven by sub-fractional Brownian motion: dX_t = (μ + θX_t) dt + dS_t^H, t ≥ 0, with X_0 = 0, where S^H is a sub-fractional Brownian motion whose Hurst index H is greater than 1/2, and μ ∈ ℝ, θ ∈ ℝ⁺ are two unknown parameters. Based on so-called continuous observations, we propose least squares estimators of μ and θ and discuss the consistency and asymptotic distributions of the two estimators.
      Citation: Algorithms
      PubDate: 2018-12-04
      DOI: 10.3390/a11120197
      Issue No: Vol. 11, No. 12 (2018)
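For orientation, the continuous-observation least squares construction proceeds by formally minimizing a quadratic criterion; since X is not differentiable, the criterion and the resulting normal equations are interpreted through the corresponding stochastic integrals. This is the standard setup (using X_0 = 0, so the integral of dX_t over [0, T] is X_T), not the paper's full derivation:

```latex
(\hat{\mu}_T,\hat{\theta}_T) \;=\; \arg\min_{\mu,\theta}\int_0^T \bigl(\dot X_t-\mu-\theta X_t\bigr)^2\,\mathrm{d}t ,
```

whose linear normal equations solve in closed form as

```latex
\hat{\mu}_T=\frac{X_T\displaystyle\int_0^T X_t^2\,\mathrm{d}t-\int_0^T X_t\,\mathrm{d}t\int_0^T X_t\,\mathrm{d}X_t}{T\displaystyle\int_0^T X_t^2\,\mathrm{d}t-\Bigl(\int_0^T X_t\,\mathrm{d}t\Bigr)^2},
\qquad
\hat{\theta}_T=\frac{T\displaystyle\int_0^T X_t\,\mathrm{d}X_t-X_T\int_0^T X_t\,\mathrm{d}t}{T\displaystyle\int_0^T X_t^2\,\mathrm{d}t-\Bigl(\int_0^T X_t\,\mathrm{d}t\Bigr)^2}.
```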
  • Algorithms, Vol. 11, Pages 198: Damage Identification Algorithm of Hinged
           Joints for Simply Supported Slab Bridges Based on Modified Hinge Plate
           Method and Artificial Bee Colony Algorithms

    • Authors: Hanbing Liu, Xin He, Yubo Jiao
      First page: 198
      Abstract: Hinge joint damage is a typical form of damage occurring in simply supported slab bridges, which can present adverse effects on the overall force distribution of the structure. However, damage identification methods for hinge joint damage are still limited. In this study, a damage identification algorithm for simply supported hinged-slab bridges based on the modified hinge plate method (MHPM) and artificial bee colony (ABC) algorithms was proposed by considering the effect of hinge damage conditions on the lateral load distribution (LLD) of structures. Firstly, MHPM was proposed and demonstrated, which is based on the traditional hinge plate method and introduces relative displacement as a damage factor to simulate hinge joint damage. The effectiveness of MHPM was verified through comparison with the finite element method (FEM). Secondly, damage identification was treated as the inverse problem of calculating the LLD in damage conditions of simply supported slab bridges. Four ABC algorithms were chosen to solve the problem due to their simple structure, ease of implementation, and robustness. Comparisons of convergence speed and identification accuracy with the genetic algorithm and particle swarm optimization were also conducted. Finally, hinged bridges composed of four and seven slabs were studied as numerical examples to demonstrate the feasibility and correctness of the proposed method. The simulation results revealed that the proposed algorithm could identify the location and degree of damaged joints efficiently and precisely.
      Citation: Algorithms
      PubDate: 2018-12-04
      DOI: 10.3390/a11120198
      Issue No: Vol. 11, No. 12 (2018)
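The ABC search loop itself is compact. Below is a minimal sketch, simplified in that the onlooker phase's fitness-proportional selection is folded into uniform greedy trials, so this is not the paper's exact variant:

```python
import numpy as np

def abc_minimize(f, bounds, n_food=15, limit=20, cycles=200, seed=0):
    """Minimal artificial bee colony (ABC) minimizer: each bee perturbs one
    coordinate of a food source toward a random partner; sources that fail
    to improve for `limit` trials are abandoned and re-seeded by scouts.
    The best solution found so far is memorized, as in standard ABC."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    g_x, g_f = X[np.argmin(fit)].copy(), float(fit.min())

    for _ in range(cycles):
        for i in range(n_food):
            k = rng.choice([j for j in range(n_food) if j != i])
            d = rng.integers(dim)
            v = X[i].copy()
            v[d] += rng.uniform(-1, 1) * (X[i, d] - X[k, d])
            v = np.clip(v, lo, hi)
            fv = f(v)
            if fv < fit[i]:
                X[i], fit[i], trials[i] = v, fv, 0
                if fv < g_f:
                    g_x, g_f = v.copy(), fv
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:   # scout phase
            X[i] = rng.uniform(lo, hi)
            fit[i] = f(X[i])
            trials[i] = 0
    return g_x, g_f
```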
  • Algorithms, Vol. 11, Pages 199: Decision Support Software for Forecasting
           Patient’s Length of Stay

    • Authors: Ioannis E. Livieris, Theodore Kotsilieris, Ioannis Dimopoulos, Panagiotis Pintelas
      First page: 199
      Abstract: Length of stay of hospitalized patients is generally considered to be a significant and critical factor for healthcare policy planning which consequently affects the hospital management plan and resources. Its reliable prediction in the preadmission stage could further assist in identifying abnormality or potential medical risks to trigger additional attention for individual cases. Recently, data mining and machine learning constitute significant tools in the healthcare domain. In this work, we introduce a new decision support software for the accurate prediction of hospitalized patients’ length of stay which incorporates a novel two-level classification algorithm. Our numerical experiments indicate that the proposed algorithm exhibits better classification performance than any examined single learning algorithm. The proposed software was developed to provide assistance to the hospital management and strengthen the service system by offering customized assistance according to patients’ predicted hospitalization time.
      Citation: Algorithms
      PubDate: 2018-12-06
      DOI: 10.3390/a11120199
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 200: Special Issue on Algorithms for the
           Resource Management of Large Scale Infrastructures

    • Authors: Danilo Ardagna, Claudia Canali, Riccardo Lancellotti
      First page: 200
      Abstract: Modern distributed systems are becoming increasingly complex as virtualization is being applied at both the levels of computing and networking. Consequently, the resource management of this infrastructure requires innovative and efficient solutions. This issue is further exacerbated by the unpredictable workload of modern applications and the need to limit the global energy consumption. The purpose of this special issue is to present recent advances and emerging solutions to address the challenge of resource management in the context of modern large-scale infrastructures. We believe that the four papers that we selected present an up-to-date view of the emerging trends, and the papers propose innovative solutions to support efficient and self-managing systems that are able to adapt, manage, and cope with changes derived from continually changing workload and application deployment settings, without the need for human supervision.
      Citation: Algorithms
      PubDate: 2018-12-10
      DOI: 10.3390/a11120200
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 201: A Fast Approach to Texture-Less Object
           Detection Based on Orientation Compressing Map and Discriminative Regional
           Weight

    • Authors: Hancheng Yu, Haibao Qin, Maoting Peng
      First page: 201
      Abstract: This paper presents a fast algorithm for texture-less object recognition, which is designed to be robust to cluttered backgrounds and small transformations. At its core, the proposed method demonstrates a two-stage template-based procedure using an orientation compressing map and discriminative regional weight (OCM-DRW) to effectively detect texture-less objects. In the first stage, the proposed method quantizes and compresses all the orientations in a neighborhood to obtain the orientation compressing map, which then is used to generate a set of possible object locations. To recognize the object in these possible object locations, the second stage computes the similarity of each possible object location with the learned template by using discriminative regional weight, which can effectively distinguish different categories of objects with similar parts. Experiments on publicly available, texture-less object datasets indicate that apart from yielding efficient computational performance, the proposed method also attained remarkable recognition rates surpassing recent state-of-the-art texture-less object detectors in the presence of high clutter, occlusion, and scale-rotation changes. It improves the accuracy and speed by 8% and 370%, respectively, relative to the previous best result on the D-Textureless dataset.
      Citation: Algorithms
      PubDate: 2018-12-12
      DOI: 10.3390/a11120201
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 202: Classification of Normal and Abnormal
           Regimes in Financial Markets

    • Authors: Jun Chen, Edward P. K. Tsang
      First page: 202
      Abstract: When financial market conditions change, traders adopt different strategies. The traders’ collective behaviour may cause significant changes in the statistical properties of price movements. When this happens, the market is said to have gone through “regime changes”. The purpose of this paper is to characterise what is a “normal market regime” as well as what is an “abnormal market regime”, under observations in Directional Changes (DC). Our study starts with historical data from 10 financial markets. For each market, we focus on a period of time in which significant events could have triggered regime changes. The observations of regime changes in these markets are then positioned in a designed two-dimensional indicator space based on DC. Our results suggest that the normal regimes from different markets share similar statistical characteristics. In other words, with our observations, it is possible to distinguish normal regimes from abnormal regimes. This is significant, because, for the first time, we can tell whether a market is in a normal regime by observing the DC indicators in the market. This opens the door for future work to be able to dynamically monitor the market for regime change.
      Citation: Algorithms
      PubDate: 2018-12-12
      DOI: 10.3390/a11120202
      Issue No: Vol. 11, No. 12 (2018)
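Directional Changes are defined by a threshold: a downturn (upturn) event is confirmed once the price retreats from its running extreme by a fraction θ. A minimal event detector in this spirit is sketched below; the paper's two-dimensional DC indicator space is built on top of such events:

```python
def dc_events(prices, theta):
    """Detect Directional Change confirmations. A downturn is confirmed
    when price drops by the fraction theta from the running maximum of an
    up run; an upturn when it rises by theta from the running minimum of a
    down run. Returns a list of (index, 'up' | 'down') confirmations."""
    events = []
    mode = None          # current run: 'up', 'down', or undetermined
    ext = prices[0]      # running extreme (max in up run, min in down run)
    for i, p in enumerate(prices[1:], start=1):
        if mode in (None, 'up'):
            if p > ext:
                ext = p
            elif p <= ext * (1 - theta):
                events.append((i, 'down'))
                mode, ext = 'down', p
                continue
        if mode == 'down':
            if p < ext:
                ext = p
            elif p >= ext * (1 + theta):
                events.append((i, 'up'))
                mode, ext = 'up', p
    return events
```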
  • Algorithms, Vol. 11, Pages 203: Finite Difference Algorithm on Non-Uniform
           Meshes for Modeling 2D Magnetotelluric Responses

    • Authors: Xiaozhong Tong, Yujun Guo, Wei Xie
      First page: 203
      Abstract: A finite-difference approach with non-uniform meshes is presented for simulating magnetotelluric responses in 2D structures. We derive the calculation formulas of this scheme from the boundary value problems of the electric and magnetic fields, and compare finite-difference solutions with finite-element numerical results and analytical solutions of a 1D model. First, a homogeneous half-space model was tested, and the finite-difference approach provided very good accuracy for 2D magnetotelluric modeling. Then we compared the solutions with the analytical ones for a two-layered geo-electric model; the relative errors of the apparent resistivity and the impedance phase both increased with frequency. Finally, we compared our finite-difference simulation results for the COMMEMI 2D-0 model with finite-element solutions; the two sets of results are in close agreement. These comparisons confirm the validity and reliability of our finite-difference algorithm. Moreover, a future project will extend the 2D structures to 3D, where non-uniform meshes should perform especially well.
      Citation: Algorithms
      PubDate: 2018-12-14
      DOI: 10.3390/a11120203
      Issue No: Vol. 11, No. 12 (2018)
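On a non-uniform mesh, the three-point second-derivative stencil generalizes the familiar uniform-h formula; the coefficients below follow from matching Taylor expansions and are exact for quadratics (a generic building block, not the paper's full 2D scheme):

```python
import numpy as np

def second_derivative_nonuniform(x, f):
    """Three-point finite-difference approximation of f'' on a non-uniform
    grid. With h1 = x[i] - x[i-1] and h2 = x[i+1] - x[i]:
        f''(x_i) ~= 2*(h2*f[i-1] - (h1+h2)*f[i] + h1*f[i+1])
                    / (h1*h2*(h1+h2))
    Returned for interior points only."""
    x, f = np.asarray(x, float), np.asarray(f, float)
    h1 = x[1:-1] - x[:-2]
    h2 = x[2:] - x[1:-1]
    return 2 * (h2 * f[:-2] - (h1 + h2) * f[1:-1] + h1 * f[2:]) \
        / (h1 * h2 * (h1 + h2))
```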
  • Algorithms, Vol. 11, Pages 204: A Novel Method for Risk Assessment and
           Simulation of Collision Avoidance for Vessels based on AIS

    • Authors: ManhCuong Nguyen, Shufang Zhang, Xiaoye Wang
      First page: 204
      Abstract: The identification of collision risks for vessels is an important element in maritime safety and management. Vessel collision avoidance systems have been studied in depth as a specialization of navigation technology. The automatic identification system (AIS) has been used to support navigation, route estimation, collision prediction, and abnormal traffic detection. This article examined the main elements of ship collision, developed a mathematical model for risk assessment, and simulated a collision assessment based on AIS information, thereby providing meaningful recommendations for crew training and for a warning system used in conjunction with the AIS on board.
      Citation: Algorithms
      PubDate: 2018-12-14
      DOI: 10.3390/a11120204
      Issue No: Vol. 11, No. 12 (2018)
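A standard building block for AIS-based risk assessment is the closest point of approach (CPA) between two constant-velocity tracks; the sketch below computes DCPA and TCPA from relative position and velocity (a common formulation, not necessarily the paper's exact model):

```python
import math

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach between two vessels moving with constant
    velocity. Returns (DCPA, TCPA); a negative TCPA means the vessels are
    already diverging. Positions and velocities in consistent units."""
    rx = tgt_pos[0] - own_pos[0]          # relative position
    ry = tgt_pos[1] - own_pos[1]
    vx = tgt_vel[0] - own_vel[0]          # relative velocity
    vy = tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0:                           # identical velocities: range fixed
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa
```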
  • Algorithms, Vol. 11, Pages 205: Optimal Sliding Mode Control for an Active
           Suspension System Based on a Genetic Algorithm

    • Authors: Chen Zhou, Xinhui Liu, Wei Chen, Feixiang Xu, Bingwei Cao
      First page: 205
      Abstract: In order to improve the dynamic quality of traditional sliding mode control for an active suspension system, an optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed. First, the overall structure and control principle of the active suspension system are introduced. Second, the mathematical model of the quarter car active suspension system is established. Third, a sliding mode control (SMC) controller is designed to manipulate the active force to control the active suspension system. Fourth, the GA is applied to optimize the weight coefficients of the SMC switching function and the parameters of the control law. Finally, the simulation model is built based on MATLAB/Simulink (version 2014a), and the simulations are performed and analyzed with the proposed control strategy to evaluate its performance. The simulation results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller.
      Citation: Algorithms
      PubDate: 2018-12-14
      DOI: 10.3390/a11120205
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 206: Optimal Design of Interval Type-2 Fuzzy
           Heart Rate Level Classification Systems Using the Bird Swarm Algorithm

    • Authors: Ivette Miramontes, Juan Carlos Guzman, Patricia Melin, German Prado-Arechiga
      First page: 206
      Abstract: In this paper, the optimal designs of type-1 and interval type-2 fuzzy systems for the classification of the heart rate level are presented. The contribution of this work is a proposed approach for achieving the optimal design of interval type-2 fuzzy systems for the classification of the heart rate in patients. The fuzzy rule base was designed based on the knowledge of experts. Optimization of the membership functions of the fuzzy systems is done in order to improve the classification rate and provide a more accurate diagnosis, and for this goal the Bird Swarm Algorithm was used. Two different type-1 fuzzy systems are designed and optimized, the first one with trapezoidal membership functions and the second with Gaussian membership functions. Once the best type-1 fuzzy systems have been obtained, these are considered as a basis for designing the interval type-2 fuzzy systems, where the footprint of uncertainty was optimized to find the optimal representation of uncertainty. After performing different tests with patients and comparing the classification rate of each fuzzy system, it is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions. Additionally, tests were performed with the Crow Search Algorithm to carry out a performance comparison, with the Bird Swarm Algorithm achieving the best results.
      Citation: Algorithms
      PubDate: 2018-12-14
      DOI: 10.3390/a11120206
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 207: Trajectory Clustering and k-NN for Robust
           Privacy Preserving Spatiotemporal Databases

    • Authors: Elias Dritsas, Maria Trigka, Panagiotis Gerolymatos, Spyros Sioutas 
      First page: 207
      Abstract: In the context of this research work, we studied the problem of privacy preserving on spatiotemporal databases. In particular, we investigated the k-anonymity of mobile users based on real trajectory data. The k-anonymity set consists of the k nearest neighbors. We constructed a motion vector of the form (x,y,g,v), where x and y are the spatial coordinates, g is the angle direction, and v is the velocity of mobile users, and studied the problem in four-dimensional space. We followed two approaches. The former applied only the k-Nearest Neighbor (k-NN) algorithm on the whole dataset, while the latter combined trajectory clustering, based on K-means, with k-NN. Specifically, it applied k-NN inside a cluster of mobile users with similar motion pattern (g,v). We defined a metric, called vulnerability, that measures the rate at which k-NNs are varying. This metric varies from 1/k (high robustness) to 1 (low robustness) and represents the probability that the real identity of a mobile user is discovered by a potential attacker. The aim of this work was to prove that, with high probability, the above rate tends to a number very close to 1/k with the clustering method, which means that k-anonymity is highly preserved. Through experiments on real spatial datasets, we evaluated the anonymity robustness, the so-called vulnerability, of the proposed method.
      Citation: Algorithms
      PubDate: 2018-12-14
      DOI: 10.3390/a11120207
      Issue No: Vol. 11, No. 12 (2018)
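The k-anonymity set itself is straightforward to construct: a Euclidean k-NN query in the four-dimensional (x, y, g, v) space, as sketched below. The paper's vulnerability metric then measures how this set varies over time (between 1/k and 1); that bookkeeping, and any scaling of the heterogeneous coordinates, is omitted here:

```python
import numpy as np

def anonymity_set(motion_vectors, i, k):
    """k-anonymity set of user i: indices of its k nearest neighbors in the
    (x, y, g, v) motion-vector space, by plain Euclidean distance."""
    M = np.asarray(motion_vectors, float)
    d = np.linalg.norm(M - M[i], axis=1)
    d[i] = np.inf                      # exclude the user itself
    return set(np.argsort(d)[:k].tolist())
```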
  • Algorithms, Vol. 11, Pages 208: On the Use of Learnheuristics in Vehicle
           Routing Optimization Problems with Dynamic Inputs

    • Authors: Quim Arnau, Angel A. Juan, Isabel Serra
      First page: 208
      Abstract: Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness.
      Citation: Algorithms
      PubDate: 2018-12-15
      DOI: 10.3390/a11120208
      Issue No: Vol. 11, No. 12 (2018)
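The idea of traveling times that depend on the emerging routing plan can be illustrated with a minimal greedy constructor (not the paper's learnheuristic): the travel-time callback receives the partial route, so its value can be re-evaluated as the solution grows. The nearest-neighbour rule and all names here are illustrative choices of ours.

```python
import math

def build_route(depot, customers, travel_time):
    """Greedy nearest-neighbour route construction in which the travel time
    between two nodes is re-evaluated through a callback each time it is
    needed, so it may depend on the partial route built so far (a stand-in
    for the paper's dynamic, solution-dependent inputs)."""
    route, current, remaining = [depot], depot, set(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: travel_time(current, c, route))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    route.append(depot)          # close the tour back at the depot
    return route

# With static Euclidean times this reduces to plain nearest-neighbour:
route = build_route((0.0, 0.0), [(3.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
                    lambda a, b, partial: math.dist(a, b))
```

A dynamic variant would make the callback add, e.g., a congestion penalty that depends on how many vehicles the partial plan already routes over the same arc.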
  • Algorithms, Vol. 11, Pages 209: Hadoop vs. Spark: Impact on Performance of
           the Hammer Query Engine for Open Data Corpora

    • Authors: Mauro Pelucchi, Giuseppe Psaila, Maurizio Toccu
      First page: 209
      Abstract: The Hammer prototype is a query engine for corpora of Open Data that provides users with the concept of blind querying. Since data sets published on Open Data portals are heterogeneous, users wishing to find interesting data sets are blind: queries cannot be fully specified, as in the case of databases. Consequently, the query engine is responsible for rewriting and adapting the blind query to the actual data sets by exploiting lexical and semantic similarity. The effectiveness of this approach was discussed in our previous works. In this paper, we report our experience in developing the query engine. In the very first version of the prototype, we realized that the implementation of the retrieval technique was too slow, even though corpora contained only a few thousand data sets. We decided to adopt the Map-Reduce paradigm in order to parallelize the query engine and improve performance. We passed through several versions of the query engine, based either on the Hadoop framework or on the Spark framework. Hadoop and Spark are two very popular frameworks for writing and executing parallel algorithms based on the Map-Reduce paradigm. In this paper, we present our study of the impact of adopting the Map-Reduce approach and its two best-known frameworks to parallelize the Hammer query engine; we discuss various implementations of the query engine, obtained either without significantly rewriting the algorithm or by completely rewriting the algorithm to exploit the high-level abstractions provided by Spark. The experimental campaign we performed shows the benefits provided by each studied solution, with the perspective of moving toward Big Data in the future. The lessons we learned are collected and synthesized into behavioral guidelines for developers approaching the problem of parallelizing algorithms by means of Map-Reduce frameworks.
      Citation: Algorithms
      PubDate: 2018-12-17
      DOI: 10.3390/a11120209
      Issue No: Vol. 11, No. 12 (2018)
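For readers unfamiliar with the paradigm, a Map-Reduce computation can be sketched in a few lines of single-process Python. This is the abstract map/shuffle/reduce model that Hadoop and Spark distribute across machines, not the Hammer engine itself; word count is the canonical example.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal single-process Map-Reduce skeleton: map each record to
    (key, value) pairs, shuffle values by key, then reduce each key."""
    shuffled = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            shuffled[key].append(value)
    return {key: reducer(key, values) for key, values in shuffled.items()}

# Word count over two tiny "documents":
docs = ["open data portals", "open data corpora"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda key, values: sum(values),
)
```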
  • Algorithms, Vol. 11, Pages 210: Multi-Objective Bi-Level Programming for
           the Energy-Aware Integration of Flexible Job Shop Scheduling and Multi-Row

    • Authors: Hongliang Zhang, Haijiang Ge, Ruilin Pan, Yujuan Wu
      First page: 210
      Abstract: The flexible job shop scheduling problem (FJSSP) and the multi-row workshop layout problem (MRWLP) are two major focuses in sustainable manufacturing processes. There is a close interaction between them, since the FJSSP provides the material handling information that guides the optimization of the MRWLP, while the layout scheme affects the scheduling scheme through the transportation time of jobs. However, traditional methods regard them as separate tasks performed sequentially, which ignores this interaction. Therefore, developing effective methods to deal with the multi-objective energy-aware integration of the FJSSP and MRWLP (MEIFM) problem in a sustainable manufacturing system is becoming increasingly important. Based on the interaction between the FJSSP and MRWLP, the MEIFM problem can be formulated as a multi-objective bi-level programming (MOBLP) model. The upper-level model for the FJSSP minimizes the makespan and total energy consumption, while the lower-level model for the MRWLP minimizes the material handling quantity. Because the MEIFM problem is formulated as a mixed-integer non-linear programming model, it is difficult to solve using traditional methods. Thus, this paper proposes an improved multi-objective hierarchical genetic algorithm (IMHGA) to solve this model. Finally, the effectiveness of the method is verified through comparative experiments.
      Citation: Algorithms
      PubDate: 2018-12-17
      DOI: 10.3390/a11120210
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 211: A Connection Between the Kalman Filter and
           an Optimized LMS Algorithm for Bilinear Forms

    • Authors: Laura-Maria Dogariu, Silviu Ciochină, Constantin Paleologu, Jacob Benesty
      First page: 211
      Abstract: The system identification problem becomes more challenging when the parameter space increases. Recently, several works have focused on the identification of bilinear forms, which are related to the impulse responses of a spatiotemporal model in the context of a multiple-input/single-output system. In this framework, the problem has been addressed in terms of the Wiener filter and different basic adaptive algorithms. This paper studies two types of algorithms tailored for the identification of such bilinear forms, i.e., the Kalman filter (along with its simplified version) and an optimized least-mean-square (LMS) algorithm. A comparison between them is also performed, which reveals interesting similarities. In addition to the mathematical derivation of the algorithms, we provide extensive experimental results, which support the theoretical findings and indicate the good performance of the proposed solutions.
      Citation: Algorithms
      PubDate: 2018-12-17
      DOI: 10.3390/a11120211
      Issue No: Vol. 11, No. 12 (2018)
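A plain (non-bilinear) LMS identification loop, sketched below for orientation, shows the basic adaptation rule that the paper's optimized variant builds on. The filter length, step size, and names are illustrative assumptions, not the authors' algorithm.

```python
def lms_identify(inputs, outputs, mu=0.05, taps=3):
    """Textbook LMS system identification (not the paper's optimized
    bilinear-form variant): adapt FIR weights w so that the filter output
    w . x_k tracks the measured output y_k."""
    w = [0.0] * taps
    for k in range(taps - 1, len(inputs)):
        x = inputs[k - taps + 1:k + 1][::-1]   # [x_k, x_{k-1}, ..., x_{k-taps+1}]
        e = outputs[k] - sum(wi * xi for wi, xi in zip(w, x))  # a priori error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]         # LMS update
    return w
```

On noiseless data generated by an FIR system, the weights converge to the true impulse response.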
  • Algorithms, Vol. 11, Pages 212: Evaluating Algorithm Efficiency for
           Optimizing Experimental Designs with Correlated Data

    • Authors: Lazarus K. Mramba, Salvador A. Gezan
      First page: 212
      Abstract: The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlation in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed model statistical framework that adjusts for both spatial and genetic correlations, based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms, including pairwise swap, genetic neighborhood, and simulated annealing, and evaluated at varying levels of heritability and of spatial and genetic correlation. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that, at a spatial correlation of 0.6 and a heritability of 0.3, under the A-optimality criterion, both the simulated annealing and simple pairwise algorithms achieved the highest design efficiency of 7.4% among genetically unrelated individuals, implying a 7.4% reduction in the average variance of the random treatment effects when the algorithm was iterated 5000 times. In contrast, results under the D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained the highest design efficiencies under all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps decreased with increasing heritability; it was highest for the simulated annealing and simple pairwise algorithms and lowest for the genetic neighborhood algorithm.
      Citation: Algorithms
      PubDate: 2018-12-18
      DOI: 10.3390/a11120212
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 213: Parallel Reservoir Simulation with OpenACC
           and Domain Decomposition

    • Authors: Zhijiang Kang, Ze Deng, Wei Han, Dongmei Zhang
      First page: 213
      Abstract: Parallel reservoir simulation is an important approach to solving real-time reservoir management problems. Recently, there has been a trend of using a graphics processing unit (GPU) to parallelize reservoir simulations. Current GPU-aided reservoir simulations focus on the compute unified device architecture (CUDA). Nevertheless, CUDA is not functionally portable across devices and requires a large amount of code. Meanwhile, domain decomposition is not well exploited in GPU-based reservoir simulations. To address these problems, we propose a parallel method with OpenACC to accelerate the serial code and reduce the time and effort of porting an application to the GPU. Furthermore, GPU-aided domain decomposition is developed to improve the efficiency of reservoir simulation. The experimental results indicate that (1) the proposed GPU-aided approach can outperform the CPU-based one by up to about two times and, with the help of OpenACC, the porting workload was reduced significantly, with the transplanted code amounting to about 22 percent of the source code, and (2) the domain decomposition method can further improve the execution efficiency by up to 1.7×. The proposed parallel reservoir simulation method is an efficient tool for accelerating reservoir simulation.
      Citation: Algorithms
      PubDate: 2018-12-18
      DOI: 10.3390/a11120213
      Issue No: Vol. 11, No. 12 (2018)
  • Algorithms, Vol. 11, Pages 165: Efficient Algorithms for Computing the
           Inner Edit Distance of a Regular Language via Transducers

    • Authors: Lila Kari, Stavros Konstantinidis, Steffen Kopecki, Meng Yang
      First page: 165
      Abstract: The concept of edit distance and its variants has applications in many areas, such as computational linguistics, bioinformatics, and synchronization error detection in data communications. Here, we revisit the problem of computing the inner edit distance of a regular language given via a Nondeterministic Finite Automaton (NFA). This problem relates to the inherent maximal error-detecting capability of the language in question. We present two efficient algorithms for solving this problem, both of which execute in time O(r²n²d), where r is the cardinality of the alphabet involved, n is the number of transitions in the given NFA, and d is the computed edit distance. We have implemented one of the two algorithms and present here a set of performance tests. The correctness of the algorithms is based on the connection between word distances and error detection and the fact that nondeterministic transducers can be used to represent the errors (resp., edit operations) involved in error-detection (resp., in word distances).
      Citation: Algorithms
      PubDate: 2018-10-23
      DOI: 10.3390/a11110165
      Issue No: Vol. 11, No. 11 (2018)
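As background, the word-to-word edit distance that underlies the inner edit distance of a language is computed by the classic dynamic program below. The paper's contribution is computing the minimum of this quantity over all pairs of distinct words of the language directly on the NFA, via transducers, rather than word by word.

```python
def edit_distance(u, v):
    """Classic dynamic-programming edit distance between two words, with
    substitutions, insertions, and deletions each costing 1."""
    m, n = len(u), len(v)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of u's first i symbols
    for j in range(n + 1):
        d[0][j] = j                     # insert all of v's first j symbols
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if u[i - 1] == v[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match / substitution
    return d[m][n]
```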
  • Algorithms, Vol. 11, Pages 166: High-Gain Observer-Based Sliding-Mode
           Dynamic Surface Control for Particleboard Glue Mixing and Dosing System

    • Authors: Peiyu Wang, Chunrui Zhang, Liangkuan Zhu, Chengcheng Wang
      First page: 166
      Abstract: In the process of particleboard glue mixing and dosing control under intermediate-frequency working conditions, a sliding-mode dynamic surface control strategy based on a high-gain observer is proposed in this paper to deal with the problem of glue flow stability caused by strong nonlinearity. The high-gain observer (HGO) estimates the derivative of the immeasurable system input signal for feedback, and the robustness of the system is improved by the dynamic surface control (DSC) method. Furthermore, the sliding-mode control (SMC) method is used to deal with disturbances caused by model uncertainties as well as external disturbances. The system is proven to be exponentially asymptotically stable by constructing a suitable Lyapunov function. Finally, simulation results show that the proposed control method makes the system track the expected flow value quickly and accurately, validating the effectiveness of the proposed control scheme.
      Citation: Algorithms
      PubDate: 2018-10-23
      DOI: 10.3390/a11110166
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 167: Online Adaptive Parameter Estimation for

    • Authors: Jun Zhao, Xian Wang, Guanbin Gao, Jing Na, Hongping Liu, Fujin Luan
      First page: 167
      Abstract: The stability and robustness of quadrotors are always influenced by unknown or immeasurable system parameters. This paper proposes a novel adaptive parameter estimation technology to obtain high-accuracy parameter estimation for quadrotors. A typical mathematical model of quadrotors is first obtained, which can be used for parameter estimation. Then, an expression of the parameter estimation error is derived by introducing a set of auxiliary filtered variables. Moreover, an augmented matrix is constructed based on the obtained auxiliary filtered variables, which is then used to design new adaptive laws to achieve exponential convergence under the standard persistent excitation (PE) condition. Finally, a simulation and an experimental verification for a typical quadrotor system are shown to illustrate the effectiveness of the proposed method.
      Citation: Algorithms
      PubDate: 2018-10-25
      DOI: 10.3390/a11110167
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 168: Fractional Order Sliding Mode Control of a
           Class of Second Order Perturbed Nonlinear Systems: Application to the
           Trajectory Tracking of a Quadrotor

    • Authors: Arturo Govea-Vargas, Rafael Castro-Linares, Manuel A. Duarte-Mermoud, Norelys Aguila-Camacho, Gustavo E. Ceballos-Benavides
      First page: 168
      Abstract: A Fractional Order Sliding Mode Control (FOSMC) is proposed in this paper for an integer second-order nonlinear system with an unknown additive perturbation term. A sufficient condition is given to assure attractiveness to a given sliding surface on which trajectory tracking is assured, despite the presence of the perturbation term. The control scheme is applied to the model of a quadrotor vehicle in order to achieve trajectory tracking in space. Simulation results are presented to evaluate the performance of the control scheme.
      Citation: Algorithms
      PubDate: 2018-10-26
      DOI: 10.3390/a11110168
      Issue No: Vol. 11, No. 11 (2018)
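A minimal integer-order sliding-mode tracking loop illustrates the sliding surface and the switching term the abstract relies on. This is not the paper's fractional-order scheme, and it controls a generic perturbed double integrator rather than a quadrotor; all gains and the boundary-layer saturation are illustrative choices.

```python
import math

def simulate_smc(steps=20000, dt=1e-3, lam=4.0, k=3.0, phi=0.05):
    """Sliding-mode tracking of x_d(t) = sin(t) for the perturbed double
    integrator x'' = u + d(t) with |d| <= 1.  Sliding surface s = e' + lam*e;
    the switching gain k exceeds the perturbation bound, and a saturation
    of width phi replaces sign(s) to limit chattering.  Returns the absolute
    tracking error at the final time."""
    x, dx = 0.5, 0.0                          # start off the reference
    for i in range(steps):
        t = i * dt
        xd, dxd, ddxd = math.sin(t), math.cos(t), -math.sin(t)
        e, de = x - xd, dx - dxd
        s = de + lam * e
        sat = max(-1.0, min(1.0, s / phi))
        u = ddxd - lam * de - k * sat         # equivalent + switching control
        ddx = u + math.sin(5 * t)             # bounded unknown perturbation
        x, dx = x + dt * dx, dx + dt * ddx    # explicit Euler step
    return abs(x - math.sin(steps * dt))
```

Because k = 3 exceeds the perturbation bound 1, s is driven into the boundary layer and the tracking error stays small despite the disturbance.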
  • Algorithms, Vol. 11, Pages 169: Parameter Estimation of a Class of Neural
           Systems with Limit Cycles

    • Authors: Xuyang Lou, Xu Cai, Baotong Cui
      First page: 169
      Abstract: This work addresses parameter estimation of a class of neural systems with limit cycles. An identification model is formulated based on the discretized neural model. To estimate the parameter vector in the identification model, recursive least-squares and stochastic gradient algorithms, as well as their multi-innovation versions obtained by introducing an innovation vector, are proposed. The simulation results on the FitzHugh–Nagumo model indicate that the proposed algorithms are effective.
      Citation: Algorithms
      PubDate: 2018-10-26
      DOI: 10.3390/a11110169
      Issue No: Vol. 11, No. 11 (2018)
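The recursive least-squares building block the abstract refers to can be sketched in its simplest scalar form. The paper's algorithms operate on the discretized neural model and add multi-innovation updates; this is only the core recursion, with hypothetical names.

```python
def rls_scalar(phis, ys, p0=1000.0):
    """Recursive least squares for the scalar model y_k = theta * phi_k + noise.
    p is the scalar covariance (large p0 = weak prior); each new sample
    refines theta through the standard gain/covariance update."""
    theta, p = 0.0, p0
    for phi, y in zip(phis, ys):
        gain = p * phi / (1.0 + p * phi * phi)   # RLS gain
        theta += gain * (y - phi * theta)        # correct with the innovation
        p -= gain * phi * p                      # shrink the covariance
    return theta
```

On noiseless data the estimate converges to the true parameter up to a negligible regularization bias of order 1/p0.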
  • Algorithms, Vol. 11, Pages 170: A Machine Learning View on Momentum and
           Reversal Trading

    • Authors: Zhixi Li, Vincent Tam
      First page: 170
      Abstract: Momentum and reversal effects are important phenomena in stock markets. In academia, relevant studies have been conducted for years. Researchers have attempted to analyze these phenomena using statistical methods and to give some plausible explanations. However, those explanations are sometimes unconvincing. Furthermore, it is very difficult to transfer the findings of these studies to real-world investment trading strategies due to the lack of predictive ability. This paper represents the first attempt to adopt machine learning techniques for investigating the momentum and reversal effects occurring in any stock market. In the study, various machine learning techniques, including the Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron Neural Network (MLP), and Long Short-Term Memory Neural Network (LSTM) were explored and compared carefully. Several models built on these machine learning approaches were used to predict the momentum or reversal effect on the stock market of mainland China, thus allowing investors to build corresponding trading strategies. The experimental results demonstrated that these machine learning approaches, especially the SVM, are beneficial for capturing the relevant momentum and reversal effects, and possibly building profitable trading strategies. Moreover, we propose the corresponding trading strategies in terms of market states to acquire the best investment returns.
      Citation: Algorithms
      PubDate: 2018-10-26
      DOI: 10.3390/a11110170
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 171: Intelligent Dynamic Backlash Agent: A
           Trading Strategy Based on the Directional Change Framework

    • Authors: Amer Bakhach, Venkata L. Raju Chinthalapati, Edward P. K. Tsang, Abdul Rahman El Sayed
      First page: 171
      Abstract: The Directional Changes (DC) framework is an approach to summarizing price movement in financial time series. Some studies have tried to develop trading strategies based on the DC framework. The Dynamic Backlash Agent (DBA) is one such trading strategy. Despite the promising results of DBA, it employed neither order size management nor risk management components. In this paper, we present an improved version of DBA named Intelligent DBA (IDBA). IDBA overcomes the weaknesses of DBA as it incorporates original order-size management and risk management modules. We examine the performance of IDBA in the forex market. The results suggest that IDBA can provide significantly greater returns than DBA. The results also show that IDBA outperforms another DC-based trading strategy and that it can generate annualized returns of about 30% after deducting the bid–ask spread (but not the transaction costs).
      Citation: Algorithms
      PubDate: 2018-10-28
      DOI: 10.3390/a11110171
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 172: Bidirectional Grid Long Short-Term Memory
           (BiGridLSTM): A Method to Address Context-Sensitivity and Vanishing

    • Authors: Hongxiao Fei, Fengyun Tan
      First page: 172
      Abstract: The Recurrent Neural Network (RNN) utilizes dynamically changing time information through time cycles, so it is very suitable for tasks with time-sequence characteristics. However, as the number of layers increases, the RNN suffers from vanishing gradients. The Grid Long Short-Term Memory (GridLSTM) recurrent neural network can alleviate this problem by taking advantage of its two computation dimensions, time and depth. In addition, a time-sequence task depends on information both before and after the current moment. In this paper, we propose a method that takes both context-sensitivity and the gradient problem into account, namely the Bidirectional Grid Long Short-Term Memory (BiGridLSTM) recurrent neural network. This model not only takes advantage of the grid architecture, but also captures information around the current moment. A large number of experiments on the LibriSpeech dataset show that BiGridLSTM is superior to other deep LSTM models and unidirectional LSTM models and, compared with GridLSTM, achieves an improvement of about 26 percent.
      Citation: Algorithms
      PubDate: 2018-10-30
      DOI: 10.3390/a11110172
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 173: Inapproximability of Rank, Clique,
           Boolean, and Maximum Induced Matching-Widths under Small Set Expansion

    • Authors: Koichi Yamazaki
      First page: 173
      Abstract: Wu et al. (2014) showed that under the small set expansion hypothesis (SSEH) there is no polynomial-time approximation algorithm with any constant approximation factor for several graph width parameters, including tree-width, path-width, and cut-width. In this paper, we extend this line of research by exploring other graph width parameters: we obtain similar approximation hardness results under the SSEH for rank-width and maximum induced matching-width, and we also show the approximation hardness of carving-width, clique-width, NLC-width, and boolean-width. In addition, we give a simpler proof of the approximation hardness of tree-width, path-width, and cut-width than that of Wu et al.
      Citation: Algorithms
      PubDate: 2018-10-31
      DOI: 10.3390/a11110173
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 174: Local Coupled Extreme Learning Machine
           Based on Particle Swarm Optimization

    • Authors: Hongli Guo, Bin Li, Wei Li, Fengjuan Qiao, Xuewen Rong, Yibin Li
      First page: 174
      Abstract: We developed a new intelligent optimization strategy for the local coupled extreme learning machine (LC-ELM). In this method, the weights and biases between the input layer and the hidden layer, as well as the addresses and radiuses in the local coupled parameters, are determined and optimized based on the particle swarm optimization (PSO) algorithm. Compared with the extreme learning machine (ELM), LC-ELM, and the extreme learning machine based on particle swarm optimization (PSO-ELM) with the same network size or compact network configuration, simulation results on regression and classification benchmark problems show that the proposed algorithm, called LC-PSO-ELM, has improved generalization performance and robustness.
      Citation: Algorithms
      PubDate: 2018-11-01
      DOI: 10.3390/a11110174
      Issue No: Vol. 11, No. 11 (2018)
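A bare-bones PSO loop of the kind wrapped around LC-ELM parameter selection can be sketched as follows. This is generic minimization on a box, with standard inertia/attraction coefficients; the paper applies the same mechanics to weights, biases, addresses, and radiuses.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization of f over [-5, 5]^dim: each
    particle keeps a velocity, its personal best, and is pulled towards
    both its personal best and the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```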
  • Algorithms, Vol. 11, Pages 175: The Bias Compensation Based Parameter and
           State Estimation for Observability Canonical State-Space Models with
           Colored Noise

    • Authors: Xuehai Wang, Feng Ding, Qingsheng Liu, Chuntao Jiang
      First page: 175
      Abstract: This paper develops a bias compensation-based parameter and state estimation algorithm for the observability canonical state-space system corrupted by colored noise. The state-space system is transformed into a linear regressive model by eliminating the state variables. Based on the determination of the noise variance and noise model, a bias correction term is added into the least squares estimate, and the system parameters and states are computed interactively. The proposed algorithm can generate the unbiased parameter estimate. Two illustrative examples are given to show the effectiveness of the proposed algorithm.
      Citation: Algorithms
      PubDate: 2018-11-01
      DOI: 10.3390/a11110175
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 176: Towards the Verbal Decision Analysis
           Paradigm for Implementable Prioritization of Software Requirements

    • Authors: Paulo Alberto Melo Barbosa, Plácido Rogério Pinheiro, Francisca Raquel de Vasconcelos Silveira
      First page: 176
      Abstract: The activity of prioritizing software requirements should be done as efficiently as possible. Selecting the most stable requirements for the most important customers of a development company can be a positive factor, considering that the available resources do not always allow the implementation of all requirements. There are many quantitative methods for the prioritization of software releases in the field of search-based software engineering (SBSE). However, we show that it is possible to use qualitative verbal decision analysis (VDA) methods to solve this type of problem. Moreover, we use the ZAPROS III-i method to prioritize requirements considering the opinion of the decision-maker, who participates in this process. Results obtained using the structured VDA methods were found to be quite satisfactory when compared to the SBSE-based methods. A comparison of results between the quantitative and qualitative methods is made and discussed. The results were reviewed and corroborated with the use of performance metrics.
      Citation: Algorithms
      PubDate: 2018-11-03
      DOI: 10.3390/a11110176
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 177: Understanding and Enhancement of Internal
           Clustering Validation Indexes for Categorical Data

    • Authors: Xuedong Gao, Minghan Yang
      First page: 177
      Abstract: Clustering is one of the main tasks of machine learning. Internal clustering validation indexes (CVIs) are used to measure the quality of several clustered partitions to determine the locally optimal clustering results in an unsupervised manner, and can act as the objective function of clustering algorithms. In this paper, we first studied several well-known internal CVIs for categorical data clustering, and proved that evaluating partitions with different numbers of clusters is ineffective without some inter-cluster separation measure or assumption; the accuracy of the separation measure, along with its coordination with the intra-cluster compactness measure, can notably affect performance. Then, aiming to enhance internal clustering validation measurement, we proposed a new internal CVI—clustering utility based on the averaged information gain of isolating each cluster (CUBAGE)—which measures both the compactness and the separation of the partition. The experimental results supported our findings with regard to the existing internal CVIs, and showed that the proposed CUBAGE outperforms other internal CVIs with or without a pre-known number of clusters.
      Citation: Algorithms
      PubDate: 2018-11-04
      DOI: 10.3390/a11110177
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 178: Deep Directional Network for Object

    • Authors: Zhaohua Hu, Xiaoyi Shi
      First page: 178
      Abstract: Existing object trackers are mostly based on correlation filtering and neural network frameworks. Correlation filtering is fast but has poor accuracy. Although a neural network can achieve high precision, its large amount of computation increases the tracking time. To address this problem, we utilize a convolutional neural network (CNN) to learn object direction. We propose a target direction classification network based on CNNs that has a directional shortcut to the tracking target, unlike particle filters, which search for the target randomly. Our network uses an end-to-end approach to determine scale variation and thus has good robustness to scale-variation sequences. In the pretraining stage, the Visual Object Tracking Challenges (VOT) dataset is used to train the network for positive and negative sample classification and direction classification. In the online tracking stage, a sliding-window operation is performed using the obtained directional information to determine the exact position of the object. The network only evaluates a single sample, which guarantees a low computational burden. The positive and negative sample redetection strategies ensure that the samples are not lost. The one-pass evaluation (OPE) results on the object tracking benchmark (OTB) demonstrate that the algorithm is very robust and is also faster than several deep trackers.
      Citation: Algorithms
      PubDate: 2018-11-05
      DOI: 10.3390/a11110178
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 179: A Reciprocal-Selection-Based
           ‘Win–Win’ Overlay Spectrum-Sharing Scheme for
           Device-to-Device-Enabled Cellular Network

    • Authors: Peng Li, Chenchen Shu, Jiao Feng
      First page: 179
      Abstract: This paper proposes a reciprocal-selection-based ‘Win–Win’ overlay spectrum-sharing scheme for Device-to-Device (D2D)-enabled cellular networks to address resource sharing between D2D devices and cellular users using an overlay approach. Based on the proposed scheme, the cell edge users (EUs) lease part of their spectrum resource to D2D transmission pairs. In return, the D2D users provide cooperative transmission assistance for the EUs in order to improve the Quality of Service of the uplink transmission from the EUs to the base station. Compared to an underlay spectrum-sharing scheme, an overlay spectrum-sharing scheme may reduce spectrum efficiency; hence, Non-Orthogonal Multiple Access technology is invoked at the D2D transmitter in order to improve the spectrum efficiency. The Stackelberg game is exploited to model the behaviours of the EUs and D2D devices. Moreover, based on matching theory, the EUs and D2D pairs form a one-to-one matching, and the stability of the matching is analysed. The simulation results show that the proposed reciprocal-selection-based ‘Win–Win’ overlay spectrum-sharing scheme is capable of providing considerable rate improvements for both EUs and D2D pairs, and of reducing the transmit power dissipated by the D2D transmitter to forward data for the EU, compared with existing methods.
      Citation: Algorithms
      PubDate: 2018-11-06
      DOI: 10.3390/a11110179
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 180: Iterative Identification for Multivariable
           Systems with Time-Delays Based on Basis Pursuit De-Noising and Auxiliary

    • Authors: Junyao You, Yanjun Liu
      First page: 180
      Abstract: This paper focuses on the joint estimation of the parameters and time-delays of multiple-input single-output output-error systems. Since the time-delays are unknown, an effective identification model with a high-dimensional and sparse parameter vector is established based on overparameterization. The identification problem is then converted into a sparse optimization problem. Based on the basis pursuit de-noising criterion and the auxiliary model identification idea, an auxiliary-model-based basis pursuit de-noising iterative algorithm is presented. The parameters are estimated by solving a quadratic program, and the unavailable terms in the information vector are updated by the auxiliary model outputs iteratively. The time-delays are estimated according to the sparse structure of the parameter vector. The proposed method can obtain effective estimates of the parameters and time-delays from few sampled data. The simulation results illustrate the effectiveness of the proposed algorithm.
      Citation: Algorithms
      PubDate: 2018-11-06
      DOI: 10.3390/a11110180
      Issue No: Vol. 11, No. 11 (2018)
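The final step, reading the time-delays off the sparse parameter vector, can be illustrated as follows. The block layout (one candidate FIR block per input) and the threshold convention are our assumptions for illustration, not the paper's exact bookkeeping.

```python
def delays_from_sparse(theta, n_inputs, order, tol=1e-3):
    """Read input time-delays off a sparse overparameterised parameter
    vector: theta is the concatenation of n_inputs candidate FIR blocks of
    length `order`; the delay of each input is taken as the index of its
    first coefficient whose magnitude exceeds tol (None if the block is
    entirely negligible)."""
    delays = []
    for i in range(n_inputs):
        block = theta[i * order:(i + 1) * order]
        delays.append(next((k for k, b in enumerate(block) if abs(b) > tol),
                           None))
    return delays
```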
  • Algorithms, Vol. 11, Pages 181: Measuring the Impact of Financial News and

    • Authors: Foteini Kollintza-Kyriakoulia, Manolis Maragoudakis, Anastasia Krithara
      First page: 181
      Abstract: In this work, we study the task of predicting the closing price of the following day of a stock, based on technical analysis, news articles and public opinions. The intuition of this study lies in the fact that technical analysis contains information about the event, but not the cause of the change, while data like news articles and public opinions may be interpreted as a cause. The paper uses time series analysis techniques such as Symbolic Aggregate Approximation (SAX) and Dynamic Time Warping (DTW) to study the existence of a relation between price data and textual information, either from news or social media. Pattern matching techniques from time series data are also incorporated, in order to experimentally validate potential correlations of price and textual information within given time periods. The ultimate goal is to create a forecasting model that exploits the previously discovered patterns in order to augment the forecasting accuracy. Results obtained from the experimental phase are promising. The performance of the classifier shows clear signs of improvement and robustness within the time periods where patterns between stock price and the textual information have been identified, compared to the periods where patterns did not exist.
      Citation: Algorithms
      PubDate: 2018-11-06
      DOI: 10.3390/a11110181
      Issue No: Vol. 11, No. 11 (2018)
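The SAX step mentioned in the abstract can be sketched as follows. This is the standard SAX procedure (z-normalization, piecewise aggregate approximation, Gaussian breakpoints), not the paper's full price/text pipeline; the two breakpoints ±0.43 are the usual tabulated values for a three-letter alphabet:

```python
import math

def sax(series, segments, breakpoints=(-0.43, 0.43), alphabet="abc"):
    """Symbolic Aggregate approXimation: z-normalise, PAA, then discretise."""
    n = len(series)
    mu = sum(series) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in series) / n)
    z = [(v - mu) / sd for v in series]
    # Piecewise Aggregate Approximation: mean of each equal-width segment
    size = n // segments
    paa = [sum(z[i * size:(i + 1) * size]) / size for i in range(segments)]
    # map each segment mean to a symbol via the Gaussian breakpoints
    def symbol(v):
        for bp, ch in zip(breakpoints, alphabet):
            if v < bp:
                return ch
        return alphabet[len(breakpoints)]
    return "".join(symbol(v) for v in paa)

# a monotonically rising series discretises to the word "abc"
print(sax([1, 1, 1, 2, 2, 2, 3, 3, 3], segments=3))  # -> abc
```

The resulting words can then be compared across price and text-sentiment series, e.g. with DTW, as the abstract describes.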
  • Algorithms, Vol. 11, Pages 182: An Algorithm for Interval-Valued
           Intuitionistic Fuzzy Preference Relations in Group Decision Making Based
           on Acceptability Measurement and Priority Weight Determination

    • Authors: Hua Zhuang, Yanzhao Tang, Meijuan Li
      First page: 182
      Abstract: Group decision making (GDM) with intuitionistic fuzzy preference information involves two key issues: acceptability measurement and priority weight determination. In this paper, we investigate these two issues with respect to the multiplicative interval-valued intuitionistic fuzzy preference relation (IVIFPR). Firstly, a consistency index is defined to measure the multiplicative consistency degree of an IVIFPR, and an optimization model is established to improve its consistency degree to an acceptable level. Next, for priority weight determination, an error-analysis-based extension method is proposed to obtain the priority weight vector from the acceptable IVIFPR. For GDM problems, decision makers’ weights are derived by the proposed multiplicative consistency index. Subsequently, the collective IVIFPR is obtained by using an interval-valued intuitionistic fuzzy (IVIF) weighted averaging operator. Finally, a step-by-step algorithm for GDM with IVIFPRs is given, an example of enterprise innovation partner selection is analyzed, and comparative analyses with existing approaches demonstrate that the proposed algorithm is both effective and practical in dealing with GDM problems.
      Citation: Algorithms
      PubDate: 2018-11-06
      DOI: 10.3390/a11110182
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 183: Virtual Belt Algorithm for the Management
           of Isolated Autonomous Intersection

    • Authors: Chentong Bian, Guodong Yin, Liwei Xu, Ning Zhang
      First page: 183
      Abstract: To enhance traffic efficiency, a novel virtual belt algorithm is proposed in this paper for the management of an isolated autonomous intersection. The proposed virtual belt algorithm consists of an offline algorithm and an online algorithm. Using the offline algorithm, the considered intersection can be modeled as several virtual belts. The online algorithm is designed for real-time application of the virtual belt algorithm. Compared with related algorithms, the main advantage of the proposed algorithm is that there are several candidate trajectories for each approaching vehicle. An approaching vehicle thus has more opportunities to obtain permission to pass the intersection, which helps improve traffic efficiency. The proposed algorithm is validated using numerical simulations conducted in Matlab and VISSIM. The simulation results show that the proposed algorithm is effective for autonomous intersection management.
      Citation: Algorithms
      PubDate: 2018-11-08
      DOI: 10.3390/a11110183
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 184: Weak Fault Detection of Tapered Rolling
           Bearing Based on Penalty Regularization Approach

    • Authors: Qing Li, Steven Y. Liang
      First page: 184
      Abstract: To address the issue of estimating the fault component from a noisy observation, a novel detection approach based on augmented Huber non-convex penalty regularization (AHNPR) is proposed. The core objectives of the proposed method are that (1) it estimates the non-zero singular values (i.e., the fault component) accurately and (2) it maintains the convexity of the proposed objective cost function (OCF) by restricting the parameters of the non-convex regularization. Specifically, the AHNPR model is expressed as the L1-norm minus a generalized Huber function, which avoids the underestimation weakness of L1-norm regularization. Furthermore, the convexity of the proposed OCF is proved via the non-diagonal characteristic of the matrix BᵀB; meanwhile, the non-zero singular values of the OCF are solved by the forward–backward splitting (FBS) algorithm. Finally, the proposed method is validated on a simulated signal and on vibration signals of a tapered bearing. The results demonstrate that the proposed approach can identify weak fault information in the raw vibration signal under severe background noise, that the non-convex penalty regularization induces sparsity of the singular values more effectively than typical convex penalties (e.g., the L1-norm fused lasso optimization (LFLO) method), and that the underestimation of sparse coefficients is reduced.
      Citation: Algorithms
      PubDate: 2018-11-08
      DOI: 10.3390/a11110184
      Issue No: Vol. 11, No. 11 (2018)
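A scalar sketch of the "L1-norm minus a generalized Huber function" penalty may clarify why it avoids the underestimation weakness of the plain L1-norm. The threshold `s` below is an assumed Huber parameter for illustration, not the paper's matrix-parameterized form:

```python
def huber(x, s=1.0):
    """Huber function: quadratic near zero, linear tails."""
    return x * x / (2 * s) if abs(x) <= s else abs(x) - s / 2

def penalty(x, s=1.0):
    """Non-convex penalty: l1-norm minus the Huber function.
    It saturates at s/2 for |x| > s, so large (fault-related) singular
    values are not shrunk as strongly as under a plain l1 penalty."""
    return abs(x) - huber(x, s)

print(penalty(0.0))   # 0.0
print(penalty(1.0))   # 0.5  (= s/2, the saturation level)
print(penalty(10.0))  # 0.5  (large values incur no extra penalty)
```

Because the penalty is bounded, large coefficients pay a constant cost instead of a cost growing linearly with their magnitude, which is the underestimation-avoidance property the abstract claims.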
  • Algorithms, Vol. 11, Pages 185: An Algorithmic Look at Financial

    • Authors: Lin Ma, Jean-Paul Delahaye
      First page: 185
      Abstract: In this paper, we attempt to give an algorithmic explanation of volatility clustering, one of the most exploited stylized facts in finance. Our analysis with daily data from five exchanges shows that financial volatilities follow Levin’s universal distribution (Kirchherr et al., 1997) once transformed into equally proportional binary strings. The frequency ranking of binary trading weeks coincides with that of their Kolmogorov complexity as estimated by Delahaye et al. (2012). According to Levin’s universal distribution, large (resp. small) volatilities are more likely to be followed by large (resp. small) ones, since simple trading weeks such as “00000” or “11111” are observed much more frequently than complex ones such as “10100” or “01011”. Thus, volatility clusters may be attributed not to behavioral or micro-structural assumptions but to the complexity discrepancy between finite strings. This property of financial data could be at the origin of volatility autocorrelation, though autocorrelated volatilities simulated from Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models cannot be transformed into universally distributed binary weeks.
      Citation: Algorithms
      PubDate: 2018-11-13
      DOI: 10.3390/a11110185
      Issue No: Vol. 11, No. 11 (2018)
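The transformation of volatilities into binary trading weeks can be illustrated roughly as follows. The paper's "equally proportional" encoding per exchange differs in detail, so the median split below is only an assumption for illustration:

```python
from collections import Counter
from statistics import median

def binary_weeks(daily_vols):
    """Encode each day as 1 if its volatility is above the sample median,
    0 otherwise, and group the bits into 5-day trading-week strings."""
    m = median(daily_vols)
    bits = ["1" if v > m else "0" for v in daily_vols]
    return ["".join(bits[i:i + 5]) for i in range(0, len(bits) - 4, 5)]

# toy sample: two calm weeks, one turbulent week, one mixed week
vols = [1, 1, 1, 1, 1,  9, 9, 9, 9, 9,  1, 1, 1, 1, 1,  9, 1, 9, 1, 9]
ranking = Counter(binary_weeks(vols)).most_common()
print(ranking)  # simple weeks such as '00000' top the frequency ranking
```

Ranking these week-strings by frequency is what the paper compares against the Kolmogorov-complexity ranking of the same strings.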
  • Algorithms, Vol. 11, Pages 186: Pricing Strategies of Logistics
           Distribution Services for Perishable Commodities

    • Authors: Tao Li, Yan Chen, Taoying Li
      First page: 186
      Abstract: The problem of pricing distribution services is challenging due to the loss in value of a product during its distribution process. Four logistics service pricing strategies are constructed in this study: a fixed pricing model, a fixed pricing model with time constraints, a dynamic pricing model, and a dynamic pricing model with time constraints, in combination with factors such as distribution time, customer satisfaction, and optimal pricing. By analyzing the relationship between the optimal price and the key parameters (the value of the decay index, consumer satisfaction, dispatch time, and the storage cost of the commodity), it is found that the larger the attenuation coefficient, the more easily the perishable goods spoil, which leads to lower distribution prices and affects consumer satisfaction. Moreover, an analysis of the average profit of the logistics service providers under the four pricing models shows that the average profit is highest in the dynamic pricing model with time constraints. Finally, a numerical experiment is given to support the findings.
      Citation: Algorithms
      PubDate: 2018-11-17
      DOI: 10.3390/a11110186
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 187: Special Issue on Reconfiguration Problems

    • Authors: Faisal Abu-Khzam, Henning Fernau, Ryuhei Uehara
      First page: 187
      Abstract: The study of reconfiguration problems has grown into a field of its own. The basic idea is to consider the scenario of moving from one given (feasible) solution to another, maintaining feasibility for all intermediate solutions. The solution space is often represented by a “reconfiguration graph”, where vertices represent solutions to the problem at hand and an edge between two vertices means that one can be obtained from the other in one step. A typical application background would be reorganization or repair work that has to be done without interruption to the service that is provided.
      Citation: Algorithms
      PubDate: 2018-11-19
      DOI: 10.3390/a11110187
      Issue No: Vol. 11, No. 11 (2018)
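As a minimal concrete example of a reconfiguration graph, the sketch below builds the token-jumping reconfiguration graph of the size-k independent sets of a small path. The solution set and single-step move rule are illustrative choices, not tied to any particular paper in the special issue:

```python
from itertools import combinations

def reconfiguration_graph(n_vertices, edges, k):
    """Token-jumping reconfiguration graph: vertices are the graph's
    independent sets of size k, and two solutions are adjacent iff one
    is obtained from the other by moving a single vertex (token)."""
    adj = set(map(frozenset, edges))
    def independent(s):
        return all(frozenset(p) not in adj for p in combinations(s, 2))
    solutions = [frozenset(c) for c in combinations(range(n_vertices), k)
                 if independent(c)]
    # a token jump changes exactly one element, so the symmetric
    # difference of the two solutions has size 2
    moves = [(a, b) for a, b in combinations(solutions, 2) if len(a ^ b) == 2]
    return solutions, moves

# path 0-1-2-3: three independent sets of size 2, connected by two moves
sols, moves = reconfiguration_graph(4, [(0, 1), (1, 2), (2, 3)], 2)
print(len(sols), len(moves))  # -> 3 2
```

Questions such as "is the reconfiguration graph connected?" or "what is the shortest reconfiguration sequence?" are then ordinary graph questions on this derived graph.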
  • Algorithms, Vol. 11, Pages 188: Differential-Evolution-Based Coevolution

    • Authors: Xiangyin Zhang, Yuying Xue, Xingyang Lu, Songmin Jia
      First page: 188
      Abstract: Learning the Bayesian networks (BNs) structure from data has received increasing attention. Many heuristic algorithms have been introduced to search for the optimal network that best matches the given training data set. To further improve the performance of ant colony optimization (ACO) in learning the BNs structure, this paper proposes a new improved coevolution ACO (coACO) algorithm, which uses the pheromone information as the cooperative factor and the differential evolution (DE) as the cooperative strategy. Different from the basic ACO, the coACO divides the entire ant colony into various sub-colonies (groups), among which DE operators are adopted to implement the cooperative evolutionary process. Experimental results demonstrate that the proposed coACO outperforms the basic ACO in learning the BN structure in terms of convergence and accuracy.
      Citation: Algorithms
      PubDate: 2018-11-19
      DOI: 10.3390/a11110188
      Issue No: Vol. 11, No. 11 (2018)
  • Algorithms, Vol. 11, Pages 189: Vibration Suppression of a Flexible-Joint
           Robot Based on Parameter Identification and Fuzzy PID Control

    • Authors: Jinyong Ju, Yongrui Zhao, Chunrui Zhang, Yufei Liu
      First page: 189
      Abstract: In order to eliminate the influence of joint torsional vibration on the system's operation accuracy, the parameter identification and elastic torsional vibration control of a flexible-joint robot are studied. Firstly, the flexible-joint robot system is treated as an equivalent rotor dynamic system, in which a mass block and a torsion spring simulate the system's inertia and elasticity links, in order to establish the system dynamic model, and the experimental prototype is constructed. Then, based on the mechanism method, the global electromechanical-coupling dynamic model of the flexible-joint robot system is constructed to clarify the mapping relationship between the driving voltage of the DC motor and the rotational speeds of joint I and joint II. Furthermore, in view of the contradiction between response speed and overshoot in the vibration suppression effect of the conventional PID controller, a fuzzy PID controller, whose parameters are determined by the different requirements of the vibration control process, is designed to adjust the driving voltage of the DC motor so as to attenuate the system's torsional vibration. Finally, simulation and control experiments are carried out, and the results show that the designed fuzzy PID controller can effectively suppress the elastic torsional vibration of the flexible-joint robot system while jointly optimizing control accuracy and dynamic quality.
      Citation: Algorithms
      PubDate: 2018-11-20
      DOI: 10.3390/a11110189
      Issue No: Vol. 11, No. 11 (2018)
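The idea of a fuzzy PID controller whose gains follow the changing requirements of the vibration-control process can be sketched on a toy first-order plant. The membership function, gain ranges, and plant below are all assumptions (and the derivative term is omitted for brevity); this is not the robot's electromechanical model:

```python
def fuzzy_gains(e):
    """Toy fuzzy scheduling: blend 'large error' gains (fast response)
    with 'small error' gains (low overshoot) via a membership in [0, 1]."""
    large = min(abs(e), 1.0)          # membership of 'error is large'
    kp = 1.0 + 2.0 * large            # more proportional action far away
    ki = 0.5 * (1.0 - large) + 0.1    # more integral action near the target
    return kp, ki

setpoint, y, integral, dt = 1.0, 0.0, 0.0, 0.05
for _ in range(400):
    e = setpoint - y
    kp, ki = fuzzy_gains(e)
    u = kp * e + ki * integral        # PI law with fuzzy-scheduled gains
    integral += e * dt
    y += dt * (u - y)                 # first-order plant: y' = u - y
print(abs(setpoint - y) < 0.05)       # converges close to the setpoint
```

The design choice mirrors the contradiction the abstract mentions: large proportional gain when the error is large (speed), larger integral gain near the target (accuracy without overshoot).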
  • Algorithms, Vol. 11, Pages 150: SLoPCloud: An Efficient Solution for
           Locality Problem in Peer-to-Peer Cloud Systems

    • Authors: Mohammed Gharib, Marzieh Malekimajd, Ali Movaghar
      First page: 150
      Abstract: Peer-to-Peer (P2P) cloud systems are becoming more popular due to their high computational capability, scalability, reliability, and efficient data sharing. However, sending and receiving massive amounts of data causes huge network traffic, leading to significant communication delays. In P2P systems, a considerable amount of this traffic and delay is owing to the mismatch between the physical layer and the overlay layer, which is referred to as the locality problem. To achieve higher performance and, consequently, resilience to failures, each peer has to make connections to geographically closer peers. To the best of our knowledge, the locality problem is not considered in any well-known P2P cloud system, yet addressing it could enhance overall network performance by shortening response times and decreasing overall network traffic. In this paper, we propose a novel, efficient, and general solution to the locality problem in P2P cloud systems based on the round-trip time (RTT). Furthermore, we suggest a flexible topology as the overlay graph to address the locality problem more effectively. Comprehensive simulation experiments demonstrate the applicability of the proposed algorithm to most well-known P2P overlay networks without introducing any serious overhead.
      Citation: Algorithms
      PubDate: 2018-10-02
      DOI: 10.3390/a11100150
      Issue No: Vol. 11, No. 10 (2018)
  • Algorithms, Vol. 11, Pages 151: K-Means Cloning: Adaptive Spherical
           K-Means Clustering

    • Authors: Abdel-Rahman Hedar, Abdel-Monem M. Ibrahim, Alaa E. Abdel-Hakim, Adel A. Sewisy
      First page: 151
      Abstract: We propose a novel method for adaptive K-means clustering. The proposed method overcomes the problems of the traditional K-means algorithm. Specifically, it does not require prior knowledge of the number of clusters, and the initial identification of the cluster elements has no negative impact on the final generated clusters. Inspired by cell cloning in microorganism cultures, each added data sample causes the existing cluster “colonies” to evaluate, together with the other clusters, various merging or splitting actions in order to reach the optimum cluster set. The proposed algorithm is suited to clustering data in isolated or overlapping compact spherical clusters. Experimental results support the effectiveness of this clustering algorithm.
      Citation: Algorithms
      PubDate: 2018-10-06
      DOI: 10.3390/a11100151
      Issue No: Vol. 11, No. 10 (2018)
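The spherical K-means step underlying the proposed cloning scheme can be sketched as follows. The merge/split "cloning" logic itself is omitted, and the data and initial centroids are toy assumptions:

```python
import math

def normalize(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def spherical_kmeans(points, centroids, iters=10):
    """Spherical k-means: assign by cosine similarity (dot product of
    unit vectors), then recompute each centroid as the normalised mean."""
    pts = [normalize(p) for p in points]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in pts:
            best = max(range(len(centroids)),
                       key=lambda k: p[0] * centroids[k][0] + p[1] * centroids[k][1])
            groups[best].append(p)
        centroids = [normalize((sum(p[0] for p in g), sum(p[1] for p in g)))
                     if g else c for g, c in zip(groups, centroids)]
    return centroids, groups

pts = [(1.0, 0.1), (1.0, -0.1), (0.1, 1.0), (-0.1, 1.0)]
cents, groups = spherical_kmeans(pts, [(1.0, 0.0), (0.0, 1.0)])
print([len(g) for g in groups])  # -> [2, 2]
```

In the adaptive scheme, each newly arriving sample would trigger an evaluation of merging or splitting these groups rather than keeping their number fixed.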
  • Algorithms, Vol. 11, Pages 152: Accelerated Iterative Learning Control of
           Speed Ripple Suppression for a Seeker Servo Motor

    • Authors: Dongqi Ma, Hui Lin
      First page: 152
      Abstract: To suppress the speed ripple of a permanent magnet synchronous motor in a seeker servo system, we propose an accelerated iterative learning control with an adjustable learning interval. First, according to the error of the current iteration, we determine the next learning interval and correct the learning gain in real time. As the number of iterations increases, the actual interval that needs correction constantly shortens, accelerating convergence. Second, we analyze the specific structure of the controller under reasonable assumptions on its operating conditions. Using the λ-norm, we give a rigorous convergence proof for the P-type iterative learning controller and derive its convergence condition. Finally, we apply the proposed method to the suppression of the periodic ripple in the rotation speed of the permanent magnet synchronous motor and establish the system model, using a periodic load torque to simulate the ripple torque of the synchronous motor. The simulation and experimental results indicate the effectiveness of the method.
      Citation: Algorithms
      PubDate: 2018-10-10
      DOI: 10.3390/a11100152
      Issue No: Vol. 11, No. 10 (2018)
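The classical P-type iterative learning law referred to in the abstract can be sketched on a toy single-gain plant. The adjustable learning interval and λ-norm analysis of the paper are omitted; the plant gain `b` and learning gain `gamma` are assumptions chosen to satisfy the usual contraction condition |1 − γb| < 1:

```python
# P-type iterative learning control on a toy static plant y(t) = b*u(t).
# Update law: u_{k+1}(t) = u_k(t) + gamma * e_k(t); the tracking error
# then contracts by a factor |1 - gamma*b| on every iteration.
b, gamma = 2.0, 0.3
y_desired = [1.0, 2.0, 3.0]
u = [0.0, 0.0, 0.0]
errors = []
for k in range(10):
    y = [b * ui for ui in u]                       # run the trial
    e = [yd - yi for yd, yi in zip(y_desired, y)]  # tracking error
    errors.append(max(abs(v) for v in e))
    u = [ui + gamma * ei for ui, ei in zip(u, e)]  # learning update
print(errors[0], errors[-1])  # error shrinks geometrically from 3.0
```

With |1 − γb| = 0.4 here, ten trials reduce the worst-case error by roughly 0.4⁹, illustrating why repeating the same periodic task lets the controller learn the ripple away.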
  • Algorithms, Vol. 11, Pages 153: Chronotype, Risk and Time Preferences, and
           Financial Behaviour

    • Authors: Di Wang, Frank McGroarty, Eng-Tuck Cheah
      First page: 153
      Abstract: This paper examines the effect of chronotype on delinquent credit card payments and stock market participation through preference channels. Using an online survey of 455 individuals who have been working for 3 to 8 years in companies in mainland China, the results reveal that morningness is negatively associated with delinquent credit card payments. Morningness also indirectly predicts delinquent credit card payments through time preference, but this relationship exists only when individuals’ monthly income is at a low or average level. On the other hand, financial risk preference accounts for the effect of morningness on stock market participation. Consequently, an additional finding is that morningness is positively associated with financial risk preference, which contradicts previous findings in the literature. Finally, based on the empirical evidence, we discuss the plausible mechanisms that may drive these relationships and the implications for theory and practice. The current study contributes to the literature by examining the links between circadian typology and particular financial behaviours of experienced workers.
      Citation: Algorithms
      PubDate: 2018-10-10
      DOI: 10.3390/a11100153
      Issue No: Vol. 11, No. 10 (2018)
  • Algorithms, Vol. 11, Pages 154: Two Hesitant Multiplicative
           Decision-Making Algorithms and Their Application to Fog-Haze Factor
           Assessment Problem

    • Authors: Lidan Pei, Feifei Jin
      First page: 154
      Abstract: The hesitant multiplicative preference relation (HMPR) is a useful tool for problems in which experts use Saaty’s 1–9 scale to express their preferences in paired comparisons of alternatives. It is known that a lack of acceptable consistency easily leads to inconsistent conclusions; therefore, consistency improvement processes and the derivation of a reliable priority weight vector for alternatives are two significant and challenging issues for decision making with hesitant multiplicative information. In this paper, some new concepts are first introduced, including the HMPR, the consistent HMPR, and the consistency index of an HMPR. Then, based on a logarithmic least squares model and a linear optimization model, two novel automatic iterative algorithms are proposed to enhance the consistency of an HMPR and to generate its priority weights, and both are proved to be convergent. Finally, the proposed algorithms are applied to a fog-haze factor assessment problem. The comparative analysis shows that the decision-making process in our algorithms is more straightforward and efficient.
      Citation: Algorithms
      PubDate: 2018-10-10
      DOI: 10.3390/a11100154
      Issue No: Vol. 11, No. 10 (2018)
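For the crisp multiplicative case, the logarithmic least squares model mentioned above reduces to the well-known geometric-mean weight derivation; the hesitant interval-valued extension of the paper is not reproduced here:

```python
import math

def priority_weights(A):
    """Logarithmic least squares solution for a multiplicative preference
    relation: w_i is proportional to the geometric mean of row i."""
    n = len(A)
    g = [math.prod(row) ** (1.0 / n) for row in A]
    s = sum(g)
    return [gi / s for gi in g]

# a perfectly consistent relation a_ij = w_i / w_j recovers w exactly
w = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in w] for wi in w]
print(priority_weights(A))  # -> [0.5, 0.3, 0.2] (up to rounding)
```

For a consistent relation the recovery is exact, which is why consistency improvement is performed before weight derivation.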
  • Algorithms, Vol. 11, Pages 155: Real-Time Tumor Motion Tracking in 3D
           Using Planning 4D CT Images during Image-Guided Radiation Therapy

    • Authors: Jang-Hwan Choi, Sooyeul Lee
      First page: 155
      Abstract: In this paper we propose a novel method for tracking the respiratory phase and 3D tumor position in real time during treatment. The method uses planning four-dimensional (4D) computed tomography (CT) obtained through the respiratory phase, and a kV projection taken during treatment. First, digitally reconstructed radiographs (DRRs) are generated from the 4DCT, and the structural similarity (SSIM) between the DRRs and the kV projection is computed to determine the current respiratory phase and magnitude. The 3D position of the tumor corresponding to the phase and magnitude is estimated using non-rigid registration by utilizing the tumor path segmented in the 4DCT. This method is evaluated using data from six patients with lung cancer and dynamic diaphragm phantom data. The method performs well irrespective of the gantry angle used, i.e., a respiration phase tracking accuracy of 97.2 ± 2.5% and a 3D tumor tracking error of 0.9 ± 0.4 mm. The phantom study reveals that the DRRs match the actual projections well. The time taken to track the tumor is 400 ± 53 ms. This study demonstrates the feasibility of a technique for tracking the respiratory phase and 3D tumor position in real time using kV fluoroscopy acquired from arbitrary angles around the freely breathing patient.
      Citation: Algorithms
      PubDate: 2018-10-11
      DOI: 10.3390/a11100155
      Issue No: Vol. 11, No. 10 (2018)
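The SSIM comparison at the heart of the phase-matching step can be sketched with the standard global SSIM formula of Wang et al.; real implementations use a local sliding window, and the tiny "images" below are toy data:

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM between two equally-sized images given as flat lists
    of intensities in [0, 1] (standard formula, no sliding window)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

img = [0.1, 0.5, 0.9, 0.4]
print(round(ssim(img, img), 6))               # -> 1.0 (identical images)
print(ssim(img, [1 - v for v in img]) < 1.0)  # -> True (dissimilar images)
```

The tracker would evaluate such a similarity score between the live kV projection and each candidate DRR and pick the best-matching respiratory phase.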
  • Algorithms, Vol. 11, Pages 156: Online Uniformly Inserting Points on the Sphere

    • Authors: Rong Zhou, Chun Chen, Liqun Sun, Francis C. M. Lau, Sheung-Hung Poon, Yong Zhang
      First page: 156
      Abstract: Uniformly inserting points on the sphere has been found useful in many scientific and engineering fields. Different from the offline version, where the number of points is known in advance, we consider the online version of this problem: requests for point insertion arrive one by one, and the target is to insert points as uniformly as possible. To measure uniformity we use the gap ratio, defined as the ratio of the maximal gap to the minimal gap between two arbitrary inserted points. We propose a two-phase online insertion strategy with a gap ratio of at most 3.69. Moreover, the lower bound on the gap ratio is proved to be at least 1.78.
      Citation: Algorithms
      PubDate: 2018-10-16
      DOI: 10.3390/a11100156
      Issue No: Vol. 11, No. 10 (2018)
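The gap ratio used as the uniformity measure can be computed roughly as below. This simplified version takes each point's gap as the Euclidean distance to its nearest neighbour, which is an assumption for illustration and not necessarily the paper's exact spherical-metric definition:

```python
import math

def gap_ratio(points):
    """Gap ratio of a point set: each point's 'gap' is the distance to
    its nearest neighbour; the ratio is the maximal gap over the minimal
    gap (1.0 means a perfectly uniform configuration)."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    gaps = [min(dist(p, q) for q in points if q is not p) for p in points]
    return max(gaps) / min(gaps)

# four points evenly spaced on the equator of the unit sphere
pts = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
print(gap_ratio(pts))  # -> 1.0
```

An online strategy must keep this ratio small after every single insertion, which is what makes the online version harder than the offline one.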
  • Algorithms, Vol. 11, Pages 157: LSTM Accelerator for Convolutional Object

    • Authors: Alkiviadis Savvopoulos, Andreas Kanavos, Phivos Mylonas, Spyros Sioutas
      First page: 157
      Abstract: Deep learning has dramatically advanced the state of the art in vision, speech and many other areas, and numerous deep learning algorithms have recently been proposed to solve traditional artificial intelligence problems. In this paper, convolutional networks of various depths are implemented in order to find the variant that provides the best trade-off between time and accuracy. Batch normalization is also considered, since it acts as a regularizer and achieves the same accuracy with fewer training steps. To reduce computational complexity while minimizing the loss of accuracy, LSTM neural network layers are incorporated into the process. The LSTM layers are shown to classify image sequences faster while achieving better precision. Concretely, the more complex the CNN, the higher the accuracy; in addition to the increase in accuracy, runtime decreased significantly, which renders the trade-off optimal. The average performance improvement across all models and both datasets used amounted to 42%.
      Citation: Algorithms
      PubDate: 2018-10-17
      DOI: 10.3390/a11100157
      Issue No: Vol. 11, No. 10 (2018)
  • Algorithms, Vol. 11, Pages 158: Incremental Learning for Classification of
           Unstructured Data Using Extreme Learning Machine

    • Authors: Sathya Madhusudhanan, Suresh Jaganathan, Jayashree L S
      First page: 158
      Abstract: Unstructured data are irregular information with no predefined data model. Streaming data, which arrive continuously over time, are unstructured, and classifying them is tedious because they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive approach, uses previously learned model information, then learns and accommodates new information from newly arrived data to provide an updated model, which avoids retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose CUIL (Classification of Unstructured data using Incremental Learning), a framework which clusters the metadata, assigns a label to each cluster, and then incrementally creates a model for each arriving batch of data using the Extreme Learning Machine (ELM), a feed-forward neural network. The proposed framework trains the batches separately, significantly reducing memory use and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. The tabulated results show that the proposed work achieves greater accuracy and efficiency.
      Citation: Algorithms
      PubDate: 2018-10-17
      DOI: 10.3390/a11100158
      Issue No: Vol. 11, No. 10 (2018)
  • Algorithms, Vol. 11, Pages 159: A Faster Algorithm for Reducing the
           Computational Complexity of Convolutional Neural Networks

    • Authors: Yulin Zhao, Donghui Wang, Leiou Wang, Peng Liu
      First page: 159
      Abstract: Convolutional neural networks have achieved remarkable improvements in image and video recognition but incur a heavy computational burden. To reduce the computational complexity of a convolutional neural network, this paper proposes an algorithm based on the Winograd minimal filtering algorithm and Strassen algorithm. Theoretical assessments of the proposed algorithm show that it can dramatically reduce computational complexity. Furthermore, the Visual Geometry Group (VGG) network is employed to evaluate the algorithm in practice. The results show that the proposed algorithm can provide the optimal performance by combining the savings of these two algorithms. It saves 75% of the runtime compared with the conventional algorithm.
      Citation: Algorithms
      PubDate: 2018-10-18
      DOI: 10.3390/a11100159
      Issue No: Vol. 11, No. 10 (2018)
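The Winograd minimal filtering idea the paper builds on can be shown in its smallest 1D instance, F(2,3), which produces two outputs of a 3-tap filter with 4 multiplications instead of 6 (the Strassen part and the 2D nesting used in the paper are omitted):

```python
def winograd_f23(d, g):
    """Winograd minimal filtering F(2,3): two outputs of a 3-tap filter
    over a 4-sample input using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference: plain sliding-window correlation."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0]
print(winograd_f23(d, g), direct(d, g))  # -> [4.5, 6.0] [4.5, 6.0]
```

The filter-dependent factors (g0 + g1 + g2)/2 and (g0 − g1 + g2)/2 can be precomputed once per filter, so only the data-dependent multiplications count at inference time.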
  • Algorithms, Vol. 11, Pages 160: Modeling and Evaluation of Power-Aware
           Software Rejuvenation in Cloud Systems

    • Authors: Sharifeh Fakhrolmobasheri, Ehsan Ataie, Ali Movaghar
      First page: 160
      Abstract: Long and continuous running of software can cause software aging-induced errors and failures. Cloud data centers suffer from these kinds of failures when Virtual Machine Monitors (VMMs), which control the execution of Virtual Machines (VMs), age. Software rejuvenation is a proactive fault management technique that can prevent the occurrence of future failures by terminating VMMs, cleaning up their internal states, and restarting them. However, the appropriate time and type of VMM rejuvenation can affect performance, availability, and power consumption of a system. In this paper, an analytical model is proposed based on Stochastic Activity Networks for performance evaluation of Infrastructure-as-a-Service cloud systems. Using the proposed model, a two-threshold power-aware software rejuvenation scheme is presented. Many details of real cloud systems, such as VM multiplexing, migration of VMs between VMMs, VM heterogeneity, failure of VMMs, failure of VM migration, and different probabilities for arrival of different VM request types are investigated using the proposed model. The performance of the proposed rejuvenation scheme is compared with two baselines based on diverse performance, availability, and power consumption measures defined on the system.
      Citation: Algorithms
      PubDate: 2018-10-18
      DOI: 10.3390/a11100160
      Issue No: Vol. 11, No. 10 (2018)
  • Algorithms, Vol. 11, Pages 161: Total Coloring Conjecture for Certain
           Classes of Graphs

    • Authors: R. Vignesh, J. Geetha, K. Somasundaram
      First page: 161
      Abstract: A total coloring of a graph G is an assignment of colors to the elements of the graph G such that no two adjacent or incident elements receive the same color. The total chromatic number of a graph G, denoted by χ″(G), is the minimum number of colors that suffice in a total coloring. Behzad and Vizing conjectured that for any graph G, Δ(G) + 1 ≤ χ″(G) ≤ Δ(G) + 2, where Δ(G) is the maximum degree of G. In this paper, we prove the total coloring conjecture for certain classes of graphs arising from the deleted lexicographic product, line graphs and double graphs.
      Citation: Algorithms
      PubDate: 2018-10-19
      DOI: 10.3390/a11100161
      Issue No: Vol. 11, No. 10 (2018)
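The definitions above can be made concrete by brute-forcing the total chromatic number of a tiny graph; the triangle K3 below is an illustrative choice, not one of the graph classes treated in the paper:

```python
from itertools import product

def valid_total_coloring(edges, colors):
    """Check that no two adjacent or incident elements share a colour:
    endpoints of an edge differ, an edge differs from both endpoints,
    and edges sharing a vertex differ."""
    for u, v in edges:
        if colors[u] == colors[v] or colors[(u, v)] in (colors[u], colors[v]):
            return False
    for e1 in edges:
        for e2 in edges:
            if e1 < e2 and set(e1) & set(e2) and colors[e1] == colors[e2]:
                return False
    return True

# brute-force the total chromatic number of the triangle K3 (Delta = 2)
vertices, edges = [0, 1, 2], [(0, 1), (0, 2), (1, 2)]
elements = vertices + edges
for k in range(1, 5):
    found = any(valid_total_coloring(edges, dict(zip(elements, c)))
                for c in product(range(k), repeat=len(elements)))
    if found:
        print(k)  # -> 3, i.e. Delta + 1, within the conjectured Delta + 2
        break
```

Exhaustive search works only for toy graphs; the paper's contribution is proving the Δ(G) + 2 upper bound structurally for whole graph classes.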
  • Algorithms, Vol. 11, Pages 162: Application of Data Science Technology on
           Research of Circulatory System Disease Prediction Based on a Prospective

    • Authors: Haijing Tang, Guo Chen, Yu Kang, Xu Yang
      First page: 162
      Abstract: Chronic diseases, represented by circulatory system diseases, have gradually become the main types of diseases affecting the health of our population. Establishing a circulatory system disease prediction model to predict and control the occurrence of these diseases is therefore of great significance for population health. Based on prospective population cohort data on chronic diseases in China and on existing medical cohort studies, the Kaplan–Meier method was used for feature selection, the Cox proportional hazards model, a traditional medical analysis model, was applied, and support vector machine (SVM) methods from machine learning were introduced to establish circulatory system disease prediction models. This paper also attempts to improve the Cox proportional hazards model by introducing the proportion of explained variation (PEV) and a shrinkage factor, and to optimize the parameters of the SVM model with the Particle Swarm Optimization (PSO) algorithm. Finally, the prediction models are verified experimentally using the model training time, the accuracy rate (ACC), the area under the receiver operating characteristic (ROC) curve (AUC), and other forecasting indicators. The experimental results show that the PSO-SVM-CSDPC disease prediction model and the S-Cox-CSDPC circulatory system disease prediction model offer fast solving speed, accurate predictions, and strong generalization ability, which is helpful for the intervention and control of chronic diseases.
      Citation: Algorithms
      PubDate: 2018-10-20
      DOI: 10.3390/a11100162
      Issue No: Vol. 11, No. 10 (2018)
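The Kaplan–Meier method used above for feature selection estimates a survival curve from possibly censored event times; a minimal sketch of the estimator (not the paper's full pipeline) is:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate: at each observed event time t the
    survival probability is multiplied by (1 - d_t / n_t), where d_t is
    the number of events at t and n_t the number still at risk."""
    data = sorted(zip(times, events))
    s, curve, at_risk, i = 1.0, [], len(data), 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, ev in data if tt == t and ev == 1)  # events at t
        c = sum(1 for tt, ev in data if tt == t)              # all leaving at t
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= c
        i += c
    return curve

# 5 subjects: events at t=1, 2, 4; censored at t=3 and t=5
print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
# survival drops to 0.8 after t=1, 0.6 after t=2, 0.3 after t=4
```

Comparing such curves across feature strata (e.g. with a log-rank test) is a common way to screen candidate predictors before fitting a Cox model.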
  • Algorithms, Vol. 11, Pages 163: Airfoil Optimization Design Based on the
           Pivot Element Weighting Iterative Method

    • Authors: Xinqiang Liu, Weiliang He
      First page: 163
      Abstract: The class function/shape function transformation (CST) is an advanced geometry representation method employed to generate airfoil coordinates. To address the ill-conditioning of the CST coefficient matrix, the pivot element weighting iterative (PEWI) method is proposed to improve the condition number of the ill-conditioned matrix in the CST. The feasibility of the PEWI method is evaluated using the RAE2822 and S1223 airfoils. The aerodynamic optimization of the S1223 airfoil is conducted on the Isight software platform. First, the S1223 airfoil is parameterized by the CST with the PEWI method. Since determining the ranges of the design variables is critical for airfoil optimization, a normalization method for the design variables is put forward. Optimal Latin hypercube sampling is applied to generate the samples, whose aerodynamic performance is calculated by numerical simulation. Then a radial basis function (RBF) neural network model is trained on these aerodynamic performance data. Finally, a multi-island genetic algorithm is applied to maximize the lift-drag ratio of the S1223. The results show that the robustness of the CST can be improved; moreover, the lift-drag ratio of the S1223 increases by 2.27% and the drag coefficient decreases by 1.4%.
      Citation: Algorithms
      PubDate: 2018-10-22
      DOI: 10.3390/a11100163
      Issue No: Vol. 11, No. 10 (2018)
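The CST representation itself can be sketched as below, using the common round-nose/sharp-trailing-edge class function exponents N1 = 0.5, N2 = 1.0 and hypothetical Bernstein shape coefficients (the PEWI conditioning fix is not shown):

```python
from math import comb

def cst(psi, shape_coeffs, n1=0.5, n2=1.0, zeta_te=0.0):
    """Class/shape function transformation for an airfoil surface:
    zeta(psi) = C(psi) * S(psi) + psi * zeta_te, with the class function
    C(psi) = psi^n1 * (1 - psi)^n2 and S(psi) a Bernstein-polynomial
    shape function with coefficients shape_coeffs."""
    n = len(shape_coeffs) - 1
    c = psi ** n1 * (1 - psi) ** n2
    s = sum(a * comb(n, i) * psi ** i * (1 - psi) ** (n - i)
            for i, a in enumerate(shape_coeffs))
    return c * s + psi * zeta_te

coeffs = [0.17, 0.15, 0.16, 0.14]          # hypothetical upper-surface coefficients
print(cst(0.0, coeffs), cst(1.0, coeffs))  # -> 0.0 0.0 (leading/trailing edge)
print(cst(0.5, coeffs) > 0)                # -> True (positive thickness at mid-chord)
```

Fitting the coefficients to measured airfoil coordinates leads to the least-squares system whose ill-conditioning motivates the paper's PEWI method.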
  • Algorithms, Vol. 11, Pages 164: Learning Representations of Natural
           Language Texts with Generative Adversarial Networks at Document, Sentence,
           and Aspect Level

    • Authors: Aggeliki Vlachostergiou, George Caridakis, Phivos Mylonas, Andreas Stafylopatis
      First page: 164
      Abstract: The ability to learn robust, resizable feature representations from unlabeled data has potential applications in a wide variety of machine learning tasks. One way to create such representations is to train deep generative models that can learn to capture the complex distribution of real-world data. Generative adversarial network (GAN) approaches have shown impressive results in producing generative models of images, but relatively little work has been done on evaluating the performance of these methods for the learning representation of natural language, both in supervised and unsupervised settings at the document, sentence, and aspect level. Extensive research validation experiments were performed by leveraging the 20 Newsgroups corpus, the Movie Review (MR) Dataset, and the Finegrained Sentiment Dataset (FSD). Our experimental analysis suggests that GANs can successfully learn representations of natural language texts at all three aforementioned levels.
      Citation: Algorithms
      PubDate: 2018-10-22
      DOI: 10.3390/a11100164
      Issue No: Vol. 11, No. 10 (2018)