  Subjects -> COMPUTER SCIENCE (Total: 2063 journals)
    - ANIMATION AND SIMULATION (31 journals)
    - ARTIFICIAL INTELLIGENCE (101 journals)
    - AUTOMATION AND ROBOTICS (105 journals)
    - CLOUD COMPUTING AND NETWORKS (64 journals)
    - COMPUTER ARCHITECTURE (10 journals)
    - COMPUTER ENGINEERING (11 journals)
    - COMPUTER GAMES (16 journals)
    - COMPUTER PROGRAMMING (26 journals)
    - COMPUTER SCIENCE (1196 journals)
    - COMPUTER SECURITY (46 journals)
    - DATA BASE MANAGEMENT (14 journals)
    - DATA MINING (35 journals)
    - E-BUSINESS (22 journals)
    - E-LEARNING (29 journals)
    - ELECTRONIC DATA PROCESSING (23 journals)
    - IMAGE AND VIDEO PROCESSING (39 journals)
    - INFORMATION SYSTEMS (110 journals)
    - INTERNET (93 journals)
    - SOCIAL WEB (51 journals)
    - SOFTWARE (33 journals)
    - THEORY OF COMPUTING (8 journals)

COMPUTER SCIENCE (1196 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 20)
Abakós     Open Access   (Followers: 4)
ACM Computing Surveys     Hybrid Journal   (Followers: 27)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 8)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 12)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 15)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 5)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 7)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 12)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 15)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 5)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 4)
ACM Transactions on Economics and Computation     Hybrid Journal  
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 19)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 8)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 8)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Full-text available via subscription   (Followers: 29)
Acta Automatica Sinica     Full-text available via subscription   (Followers: 2)
Acta Informatica Malaysia     Open Access  
Acta Universitatis Cibiniensis. Technical Series     Open Access  
Ad Hoc Networks     Hybrid Journal   (Followers: 11)
Adaptive Behavior     Hybrid Journal   (Followers: 11)
Advanced Engineering Materials     Hybrid Journal   (Followers: 28)
Advanced Science Letters     Full-text available via subscription   (Followers: 10)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 7)
Advances in Artificial Intelligence     Open Access   (Followers: 15)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 2)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 19)
Advances in Computer Engineering     Open Access   (Followers: 4)
Advances in Computing     Open Access   (Followers: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 51)
Advances in Engineering Software     Hybrid Journal   (Followers: 27)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 13)
Advances in Human Factors/Ergonomics     Full-text available via subscription   (Followers: 21)
Advances in Human-Computer Interaction     Open Access   (Followers: 20)
Advances in Materials Sciences     Open Access   (Followers: 14)
Advances in Operations Research     Open Access   (Followers: 12)
Advances in Parallel Computing     Full-text available via subscription   (Followers: 7)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Remote Sensing     Open Access   (Followers: 44)
Advances in Science and Research (ASR)     Open Access   (Followers: 6)
Advances in Technology Innovation     Open Access   (Followers: 5)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 4)
AI EDAM     Hybrid Journal  
Air, Soil & Water Research     Open Access   (Followers: 11)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 6)
Algebras and Representation Theory     Hybrid Journal   (Followers: 1)
Algorithms     Open Access   (Followers: 11)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 5)
American Journal of Computational Mathematics     Open Access   (Followers: 4)
American Journal of Information Systems     Open Access   (Followers: 5)
American Journal of Sensor Technology     Open Access   (Followers: 4)
Anais da Academia Brasileira de Ciências     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 7)
Analysis in Theory and Applications     Hybrid Journal   (Followers: 1)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 5)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 12)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 12)
Annals of Pure and Applied Logic     Open Access   (Followers: 2)
Annals of Software Engineering     Hybrid Journal   (Followers: 13)
Annual Reviews in Control     Hybrid Journal   (Followers: 6)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 2)
Applied and Computational Harmonic Analysis     Full-text available via subscription   (Followers: 1)
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 12)
Applied Categorical Structures     Hybrid Journal   (Followers: 2)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 11)
Applied Computer Systems     Open Access   (Followers: 2)
Applied Informatics     Open Access  
Applied Mathematics and Computation     Hybrid Journal   (Followers: 34)
Applied Medical Informatics     Open Access   (Followers: 10)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Soft Computing     Hybrid Journal   (Followers: 16)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 5)
Applied System Innovation     Open Access  
Architectural Theory Review     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 5)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 143)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 5)
arq: Architectural Research Quarterly     Hybrid Journal   (Followers: 7)
Artifact     Hybrid Journal   (Followers: 2)
Artificial Life     Hybrid Journal   (Followers: 7)
Asia Pacific Journal on Computational Engineering     Open Access  
Asia-Pacific Journal of Information Technology and Multimedia     Open Access   (Followers: 1)
Asian Journal of Computer Science and Information Technology     Open Access  
Asian Journal of Control     Hybrid Journal  
Assembly Automation     Hybrid Journal   (Followers: 2)
at - Automatisierungstechnik     Hybrid Journal   (Followers: 1)
Australian Educational Computing     Open Access   (Followers: 1)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 4)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 11)
Automation in Construction     Hybrid Journal   (Followers: 6)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 9)
Basin Research     Hybrid Journal   (Followers: 5)
Behaviour & Information Technology     Hybrid Journal   (Followers: 53)
Big Data and Cognitive Computing     Open Access   (Followers: 2)
Biodiversity Information Science and Standards     Open Access  
Bioinformatics     Hybrid Journal   (Followers: 294)
Biomedical Engineering     Hybrid Journal   (Followers: 15)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 13)
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 21)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 37)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 47)
British Journal of Educational Technology     Hybrid Journal   (Followers: 138)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 12)
c't Magazin fuer Computertechnik     Full-text available via subscription   (Followers: 1)
CALCOLO     Hybrid Journal  
Calphad     Hybrid Journal   (Followers: 2)
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 15)
Capturing Intelligence     Full-text available via subscription  
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 2)
Cell Communication and Signaling     Open Access   (Followers: 2)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 5)
CERN IdeaSquare Journal of Experimental Innovation     Open Access   (Followers: 3)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 14)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 7)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
CIN Computers Informatics Nursing     Full-text available via subscription   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 15)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Cluster Computing     Hybrid Journal   (Followers: 1)
Cognitive Computation     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 14)
Communication Methods and Measures     Hybrid Journal   (Followers: 12)
Communication Theory     Hybrid Journal   (Followers: 21)
Communications Engineer     Hybrid Journal   (Followers: 1)
Communications in Algebra     Hybrid Journal   (Followers: 3)
Communications in Computational Physics     Full-text available via subscription   (Followers: 2)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 3)
Communications of the ACM     Full-text available via subscription   (Followers: 52)
Communications of the Association for Information Systems     Open Access   (Followers: 16)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 3)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 6)
Complexus     Full-text available via subscription  
Composite Materials Series     Full-text available via subscription   (Followers: 8)
Computación y Sistemas     Open Access  
Computation     Open Access   (Followers: 1)
Computational and Applied Mathematics     Hybrid Journal   (Followers: 2)
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 2)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 9)
Computational Astrophysics and Cosmology     Open Access   (Followers: 1)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 12)
Computational Chemistry     Open Access   (Followers: 2)
Computational Cognitive Science     Open Access   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Condensed Matter     Open Access  
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 9)
Computational Geosciences     Hybrid Journal   (Followers: 16)
Computational Linguistics     Open Access   (Followers: 23)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 7)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 30)
Computer     Full-text available via subscription   (Followers: 95)
Computer Aided Surgery     Open Access   (Followers: 6)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 8)
Computer Communications     Hybrid Journal   (Followers: 16)
Computer Journal     Hybrid Journal   (Followers: 9)
Computer Methods in Applied Mechanics and Engineering     Hybrid Journal   (Followers: 23)
Computer Methods in Biomechanics and Biomedical Engineering     Hybrid Journal   (Followers: 12)
Computer Methods in the Geosciences     Full-text available via subscription   (Followers: 2)
Computer Music Journal     Hybrid Journal   (Followers: 19)
Computer Physics Communications     Hybrid Journal   (Followers: 7)


Applied Soft Computing
Journal Prestige (SJR): 1.199
Citation Impact (citeScore): 5
Number of Followers: 16  
 
  Hybrid Journal (it can contain Open Access articles)
ISSN (Print) 1568-4946
Published by Elsevier  [3162 journals]
  • A 2-dimension uncertain linguistic DEMATEL method for identifying critical
           success factors in emergency management
    • Abstract: Publication date: Available online 11 July 2018. Source: Applied Soft Computing. Author(s): Xue-Feng Ding, Hu-Chen Liu. Due to the variety and destructiveness of unconventional emergencies, it is of great necessity to analyze and optimize emergency management systematically. The performance of emergency management is often influenced by many factors, and it is not realistic to improve all influential factors simultaneously due to limited resources. To address this concern, a new combined method based on 2-dimension uncertain linguistic variables (2DULVs) and decision-making trial and evaluation laboratory (DEMATEL) is proposed in this paper to determine critical success factors (CSFs) in emergency management. Specifically, the evaluations of influential factors given by decision makers are represented and processed using 2DULVs. Based on the DEMATEL technique, the cause-effect classification of factors is obtained and the CSFs in emergency management can be identified. Finally, a case study is provided to illustrate the practicality and usefulness of the proposed 2-dimension uncertain linguistic DEMATEL (2DUL-DEMATEL) method. The results show that by enhancing the performance of five CSFs, the effectiveness and efficiency of emergency management could be improved greatly.
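    The abstract builds on the classical DEMATEL procedure (normalize a direct-relation matrix, compute the total-relation matrix, then derive prominence and cause-effect relations). The sketch below illustrates only that crisp core with a hypothetical 4-factor direct-relation matrix; the paper's 2-dimension uncertain linguistic handling is not reproduced.

        import numpy as np

        # Hypothetical direct-relation matrix: D[i, j] = influence of factor i on factor j (0-4 scale)
        D = np.array([[0, 3, 2, 4],
                      [1, 0, 3, 2],
                      [2, 1, 0, 3],
                      [1, 2, 1, 0]], dtype=float)

        n = D.shape[0]
        s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
        N = D / s                                   # normalized direct-relation matrix
        T = N @ np.linalg.inv(np.eye(n) - N)        # total-relation matrix
        r, c = T.sum(axis=1), T.sum(axis=0)         # influence dispatched / received by each factor

        prominence = r + c                          # overall importance of each factor
        relation = r - c                            # > 0: cause factor, < 0: effect factor
        print("cause factors:", np.where(relation > 0)[0])
        print("factors ranked by prominence:", np.argsort(-prominence))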
       
  • MTEEGC: A Novel Approach for Multi-Trial EEG Clustering
    • Abstract: Publication date: Available online 11 July 2018. Source: Applied Soft Computing. Author(s): Chenglong Dai, Dechang Pi, Lin Cui, Yanlong Zhu. This paper explores multi-trial EEG (electroencephalography signal) clustering and proposes a novel centroid-based approach for it. It first utilizes an improved cross correlation to measure the similarities of multi-trial EEGs and then proposes an optimal EEG feature extraction to seek cluster centroids based on the improved cross correlation similarities. Finally, it leads to a novel algorithm called MTEEGC for multi-trial EEG clustering. MTEEGC yields high-quality multi-trial EEG clustering with respect to intra-cluster compactness as well as inter-cluster scatter. The paper also demonstrates the superiority of MTEEGC in clustering accuracy over 10 state-of-the-art time series clustering algorithms through detailed experimentation using standard cluster validity criteria on 5 real-world multi-trial EEG datasets. In particular, compared with the worst and the best of the 10 baseline algorithms, MTEEGC achieves 36.11% and 2.53% mean improvements in clustering accuracy (i.e., RI), respectively, on the 5 multi-trial EEG datasets.
       
  • A Novel Phase I Fuzzy Profile Monitoring Approach based on Fuzzy Change
           Point Analysis
    • Abstract: Publication date: Available online 11 July 2018. Source: Applied Soft Computing. Author(s): Ghazale Moghadam, Gholam Ali Raissi Ardali, Vahid Amirzadeh. In many applications of quality control, the quality of a product or service is described by its profile, which is a relationship between a response variable and one or more independent variables. However, in real-world applications, vagueness, imprecision and uncertainty in data are inevitable, and hence profile monitoring of fuzzy data is an important issue. In this paper, we discuss phase I of fuzzy profile monitoring, when the response variables are fuzzy and vague, and propose a new method for estimating the change point. The proposed method, called the fuzzy change point technique, is based on the principle of the Maximum Likelihood Estimator (MLE) with fuzzy observations. The performance of the proposed method is evaluated by its ability to satisfy the goals of phase I fuzzy profile monitoring and is based on “the probability of an out-of-control signal” and the accuracy of the change point estimator. Simulation results show that this method outperforms methods known to date. Moreover, we are not aware of any other method that is able to determine the real time of change in a process. The applicability of the proposed method is demonstrated by a case study in the ceramic tile industry.
       
  • A novel optimal replication allocation strategy for particle swarm
           optimization algorithms applied to simulation optimization problem
    • Abstract: Publication date: Available online 11 July 2018. Source: Applied Soft Computing. Author(s): Chun-Chih Chiu, James T. Lin. This study proposes a simulation optimization algorithm, called the optimal replication allocation strategy (ORAS), for particle swarm optimization (PSO) algorithms in simulation models with high computing cost in large design spaces. In this study, PSO is employed to explore and exploit near-optimal or optimal solutions in the design space. Given the uncertainties in real-world applications, a simulation model is constructed to evaluate the performance of each design alternative. Optimal computing budget allocation (OCBA), a state-of-the-art resampling method, is combined with metaheuristic principles to improve the accuracy of estimating the best solutions and to enhance efficiency by intelligently allocating the number of replications. However, the solution space is nonlinear or multimodal; that is, various local or global solutions exist. In OCBA, the probability of correct selection (P(CS)) of the current best solution serves as an important measurement. P(CS) refers to the probability that the “best” of k populations is selected according to some specified criteria of best. OCBA can halt allocation when P(CS) rises above the desired value, i.e., P(CS)*. The multimodal solution space features a high probability of reaching a low P(CS), as many solutions perform closely to the current best of each generation. This situation indicates that P(CS) cannot achieve P(CS)*; OCBA cannot stop allocation, and additional computational cost may be wasted. In this study, we redefine and modify P(CS). The new version is P(CSE), which considers the calculation of the global best, called the super individual, instead of the current best solution within a confidence level. The proposed ORAS can provide an asymptotically optimal allocation rule for combination with population metaheuristics based on P(CSE). We apply ORAS using the original PSO and two variants of PSO to address a stochastic buffer allocation problem and a stochastic function optimization problem, compared with several state-of-the-art technologies from the literature. The resulting ORAS with three PSOs is an effective procedure, as it intelligently utilizes limited computing resources. Numerical tests indicate that ORAS increases P(CSE) in each generation and subsequently enhances the efficiency of PSO algorithms.
       
  • LFNN: Lion fuzzy neural network-based evolutionary model for text
           classification using context and sense based features
    • Abstract: Publication date: Available online 11 July 2018. Source: Applied Soft Computing. Author(s): Nihar M. Ranjan, Rajesh S. Prasad. Text classification is one of the popular techniques of text mining that labels documents based on a set of topics defined according to the requirements. Among the various approaches used for text categorization, incremental learning techniques are important due to their widespread applications. This paper presents a connectionist classification approach using context-semantic features and an LFNN-based incremental learning algorithm for text classification. The proposed technique considers a dynamic database for the classification so that the classifier can learn the model dynamically. This incremental learning process adopts the Back Propagation Lion (BPLion) Neural Network, which includes fuzzy bounding and the Lion Algorithm (LA) for the feasible selection of weights. The effectiveness of the proposed method is analyzed by comparing it with the existing techniques I-BP, FI-BP, and I-BPLion in terms of accuracy and error. As a result of the comparison, classification accuracies of 81.49%, 83.41%, 88.76%, and 95%, and minimum error values of 8.11, 7.49, 3.02, and 4.92, are attainable with LFNN for the 20 Newsgroup, Reuters, WebKB, and RCV1 datasets, respectively.
       
  • Multi-objective optimal design of submerged arches using extreme learning
           machine and evolutionary algorithms
    • Abstract: Publication date: Available online 10 July 2018. Source: Applied Soft Computing. Author(s): Alejandro M. Hernández-Díaz, Andrés Bueno-Crespo, Jorge Pérez-Aracil, José M. Cecilia. The design of funicular (or momentless) submerged arches has great application in the fields of building construction and civil engineering. Traditional approaches in this field have been based on the resolution of the ordinary differential equations that govern the structural behavior of submerged arches. However, these approaches consider only a single design parameter and are computationally expensive. For intermediate depth ratios, the funicular shape of the arch lies about halfway between the geometric forms of the parabola and the ellipse. In fact, the arch centerline can be modeled as a parametric linear function of these two conical shapes in which different parameters are established, opening new opportunities for optimization in the design of such structures that also considers several design parameters. In this article, we propose a methodology to optimize several parameters in the design of submerged arches. Specifically, we focus on the reduction of the arch bending moment, which is a critical factor in the design cost of the structure, and on the maximization of the airspace enclosed by the arch, which is of particular interest for the serviceability of recreational submerged installations. Our methodology is based on a multi-objective evolutionary algorithm, which uses artificial neural networks with extreme learning machine (ELM) to predict the level of bending stress in the submerged arch under different shape configurations and also to reduce the overall computational cost. Two groups of test examples, corresponding to deep and shallow waters, are developed to compare the numerical results obtained by multi-objective optimization with the theoretical curves predicted by traditional funicular analysis. Our experimental results offer good accuracy (R2 up to 93%) in the fitness evaluation using ELM. After the multi-objective optimization procedure, our results show optimal arch shapes with minimum bending stress (i.e., minimum cost) and maximum airspace; thus, the functionality of the underwater installation is also optimal.
       
  • A Decomposition-based Kernel of Mallows Models Algorithm for bi- and
           tri-objective Permutation Flowshop Scheduling Problem
    • Abstract: Publication date: Available online 10 July 2018. Source: Applied Soft Computing. Author(s): Murilo Zangari, Ademir Aparecido Constantino, Josu Ceberio. In the last decades, the permutation flowshop scheduling problem (PFSP) has been thoroughly studied in combinatorial optimization and scheduling research. The most common objectives for this problem are makespan, total flowtime, and total tardiness. Furthermore, most production scheduling problems naturally involve multiple conflicting objectives to be optimized. Over the years, several approaches have been proposed to solve the multi-objective PFSP. In the field of multi-objective optimization research, multi-objective evolutionary algorithms based on decomposition have been recognized as one of the main strategies for solving multi-objective optimization problems. In this paper, we present a MOEA that integrates the decomposition approach and a probability model for ranking data (particularly, the Mallows model) for solving the PFSP while minimizing two and three objectives. We have conducted an extensive experimental study using a well-known benchmark set. In the bi-objective case, the proposed algorithm achieves competitive results compared to the best-known approximated Pareto fronts from the literature. In the tri-objective scenario, our algorithm achieves the best results in comparison with some state-of-the-art decomposition-based algorithms.
       
  • NeSS: A Modified Artificial Bee Colony Approach Based on Nest Site
           Selection Behavior
    • Abstract: Publication date: Available online 10 July 2018. Source: Applied Soft Computing. Author(s): Unchalisa Taetragool, Booncharoen Sirinaovakul, Tiranee Achalakul. This paper proposes a new version of the Artificial Bee Colony (ABC) optimization algorithm based on the natural nest-site selection behavior of honey bee swarms. The algorithm offers three advantages. First, the responsibility of bees in the proposed algorithm is dynamic, meaning that bee agents may change states (explorer, resting, observer, and committed) during execution. New feasible solutions are randomly generated in every iteration by varying the numbers of bee agents, thus introducing diversified candidate solutions into the solution space. Second, each bee records its own satisfaction with each nest site, and this satisfaction value is added to the quality measure for the nest site, allowing lower-quality nest sites to be retained as candidate solutions and again providing greater diversity. Third, a quorum mechanism provides the stopping criterion instead of the maximum cycle number (MCN). This mechanism helps the algorithm converge more quickly. Experiments on the traveling salesman and clustering problems have been conducted in order to observe the algorithm's performance. The results show promising performance in solving combinatorial problems when compared with other state-of-the-art algorithms. In addition, the proposed algorithm can converge more quickly with the quorum mechanism.
       
  • Stationary Mahalanobis Kernel SVM for Credit Risk Evaluation
    • Abstract: Publication date: Available online 10 July 2018. Source: Applied Soft Computing. Author(s): Hao Jiang, Wai-Ki Ching, Ka Fai Cedric Yiu, Yushan Qiu. This paper proposes Mahalanobis distance induced kernels in Support Vector Machines (SVMs), with applications in credit risk evaluation. We take a particular interest in stationary ones. Compared to traditional stationary kernels, Mahalanobis kernels take feature correlations into account and can provide a more suitable description of the behavior of the data sets. Results on real-world credit data sets show that stationary kernels with the Mahalanobis distance outperform stationary kernels with various other distance measures, and they can also compete with frequently used kernels in SVM. The superior performance of our proposed kernels over other classical machine learning methods and the successful application of the kernels in large-scale credit risk evaluation problems may imply that we have proposed a new class of kernels appropriate for credit risk evaluation.
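    A stationary kernel induced by the Mahalanobis distance can be prototyped as a custom SVC kernel. The following sketch uses a Gaussian form exp(-d_M^2 / (2*sigma^2)) on synthetic stand-in data; the exact kernel family and the credit data sets used by the authors are assumptions, not taken from the abstract.

        import numpy as np
        from scipy.spatial.distance import cdist
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=10, random_state=0)  # stand-in data
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        VI = np.linalg.pinv(np.cov(X_tr, rowvar=False))     # inverse covariance from training data

        def mahalanobis_rbf(A, B, sigma=1.0):
            # Gaussian kernel built on the Mahalanobis distance d_M(a, b)
            d = cdist(A, B, metric="mahalanobis", VI=VI)
            return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

        clf = SVC(kernel=mahalanobis_rbf).fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))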
       
  • A recursive PSO scheme for gene selection in microarray data
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Yamuna Prasad, K.K. Biswas, M. Hanmandlu. In DNA microarray datasets, the number of genes is very large, typically in the thousands, while the number of samples is in the hundreds. This raises the issue of generalization in the classification process. Gene selection plays a significant role in improving the accuracy. In this paper, we propose a recursive particle swarm optimization (PSO) approach for gene selection. The proposed method refines the feature (gene) space from a very coarse level to a fine-grained one at each recursive step of the algorithm without degrading the accuracy. In addition, we have integrated various filter-based ranking methods with the proposed recursive PSO approach. We also propose to use the linear support vector machine weight vector for the initial gene pool selection. We evaluate our method on five publicly available benchmark microarray datasets. Our approach selects only a small number of genes while yielding substantial improvements in accuracy over state-of-the-art evolutionary methods.
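    The "linear SVM weight vector as initial gene pool" step mentioned above is simple to illustrate: train a linear SVM and rank genes by coefficient magnitude. The sketch below uses synthetic stand-in data and an arbitrary pool size of 200; the recursive PSO refinement itself is not shown.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        # Stand-in for a microarray data set: few samples, thousands of gene features
        X, y = make_classification(n_samples=100, n_features=2000, n_informative=20, random_state=0)
        X = StandardScaler().fit_transform(X)

        svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        scores = np.abs(svm.coef_).ravel()              # per-gene weight magnitudes
        initial_pool = np.argsort(scores)[::-1][:200]   # top-ranked genes seed the search
        print("first genes in the initial pool:", initial_pool[:10])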
       
  • A clustering based methodology to support the translation of medical
           specifications to software models
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Francesco Gargiulo, Stefano Silvestri, Mario Ciampi. In this paper we propose a methodology to reduce the complexity of realizing a software validation model, starting from medical specifications written in Italian natural language text. In order to obtain an automatic validation system, it is necessary to manually translate the specification documents into software models. This task is long, tedious and error prone due to the manual effort needed. To speed up this process and to reduce the errors that can occur, an important boost can be obtained from grouping the conformance rules belonging to the same pattern. Clustering algorithms can accomplish this task, but they require the total number of clusters to be known a priori, which is not possible in this kind of problem. To this aim, we propose two innovative methodologies for automatic cluster selection that are able to evaluate the optimal number of clusters, based on an iterative internal cluster measure evaluation. These approaches consider three different Vector Space Models (VSMs), two different clustering algorithms, and the impact of using the Principal Component Analysis technique. The experimental assessment has been performed on four different datasets extracted from the HL7 CDA R2 Italian-language conformance rules specification documents, demonstrating the effectiveness of the proposed methodology. Finally, in order to compare the results of all possible configurations, we performed a non-parametric statistical analysis. The obtained results demonstrate the effectiveness of the proposed methodology for automatic cluster number selection.
       
  • Probability-directed random search algorithm for unconstrained
           optimization problem
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Muhammed Jassem Al-Muhammed, Raed Abu Zitar. Devising ways of handling problem optimization is an important yet challenging task. The aim is always for methods that can effectively and quickly discover the global optimum of rather complicated mathematical functions that model real-world settings. Typically, these functions are too difficult to optimize globally because they may (1) lack continuity and differentiability, (2) have multiple local optima, and (3) have complex expressions. In this paper, we address this challenge by offering an algorithm that combines random search techniques with both an effective mapping and a dynamic adjustment of its search behavior. Our proposed algorithm automatically builds two types of triangular search directives over the unity intervals: principal and marginal. These search directives guide the search within both the effective regions of the search domain, which most likely contain the optimum, and the marginal regions of the search domain, which less likely contain the optimum. During the search, the algorithm monitors the intermediate search results and dynamically adjusts the directives' parameters to quickly move the search towards the optimum. Experiments with our prototype implementation showed that our method can effectively find the global optima of rather complicated mathematical functions chosen from well-known benchmarks, and that it performed better than other algorithms.
       
  • A study of automatic separation and recognition for overlapped
           fingerprints
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Hsieh-Chang Huang, Ching-Tang Hsieh, Min-Nan Hsiao, Cheng-Hsiang Yeh. This paper presents an improved fingerprint recognition system for overlapping fingerprints of two fingers. The developed system can automatically and precisely separate an overlapping fingerprint into two areas, an overlapping area and a non-overlapping area, by analyzing their orientation fields and complexities. A method able to remove a misjudged core of the fingerprint from the overlapping area is also proposed in this paper. Moreover, a recursive correction algorithm and a restriction-relaxation labeling algorithm are applied to separate the two fingerprints from the determined overlapping fingerprint area, and the Gabor filter is applied to enhance the image quality of the two separated fingerprints. After that, the VeriFinger 6.2 SDK is used to identify the fingerprints. In the experiments, our system uses fingerprints from two public databases. Receiver Operating Characteristic (ROC) and Cumulative Match Characteristic (CMC) curves are used to examine the proposed fingerprint recognition system. The average recognition rate of our system is 85.7%. The experimental results show the effectiveness of the proposed system.
       
  • Autonomous task allocation for multi-UAV systems based on the locust
           elastic behavior
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Heba A. Kurdi, Ebtesam Aloboud, Maram Alalwan, Sarah Alhassan, Ebtehal Alotaibi, Guillermo Bautista, Jonathan P. How. Task allocation is a grand challenge facing researchers and practitioners in multiple unmanned aerial vehicle (multi-UAV) missions. This paper proposes a new autonomous bio-inspired approach for efficiently allocating tasks among multiple UAVs during a mission. Task assignments are dynamically adjusted by each UAV on the basis of criteria related to the individual UAV's operational status and the mission parameters, without direct communication between the UAVs actively taking part in the mission. The proposed approach is inspired by the nature of locust species and their autonomous and elastic behavior in response to internal and external stimuli. Four long-standing multi-UAV task allocation paradigms, namely the auction-based, max-sum, ant colony optimization and opportunistic coordination schemes, were used to benchmark the performance of the proposed approach. Experimental results demonstrate that the new approach substantially improves the net throughput and the mean task completion time while maintaining a linear running time when compared to all benchmarks under different fleet sizes and numbers of tasks, demonstrating better scalability and sustainability.
       
  • Hybrid artificial algae algorithm for economic load dispatch
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Mohit Kumar, J.S. Dhillon. A hybrid artificial algae algorithm (HAAA) is proposed in this paper that hybridizes the artificial algae algorithm (AAA) and the simplex search method (SSM) to solve the economic load dispatch problem. The AAA simulates the life cycle of microalgae, which comprises helical movement, evolutionary and adaptation phases. The helical movement phase provides exploration, while the evolutionary and adaptation phases provide exploitation of the search space. The exploration provided by the helical movement depends entirely on the energy and shear-force parameters. In the proposed algorithm, the AAA acts as the global optimizer while the SSM provides local search. The SSM improves the exploitation capability by performing a local search. Dynamic tuning of parameters enhances the exploration capability of the proposed method. An iterative heuristic repair algorithm is applied to handle the equality constraint of the economic load dispatch problem. Further, operation in prohibited operating zones is avoided heuristically. The performance of HAAA is evaluated on the CEC’05 standard test functions and on small, medium and large-scale economic load dispatch problems. The simulation results indicate that the proposed algorithm produces better or comparable results than well-established and popular algorithms. Finally, the Wilcoxon signed rank test is performed to validate the results.
       
  • A novel internal validity index based on the cluster centre and the
           nearest neighbour cluster
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Shibing Zhou, Zhenyuan Xu. It is crucial to evaluate the clustering quality in cluster analysis. In this paper, a new internal cluster validity index based on the cluster centre and the nearest neighbour cluster is designed according to the geometric distribution of objects. Moreover, a method for determining the optimal number of clusters is proposed. The new methodology can evaluate the clustering results produced by a certain clustering algorithm and determine the optimal number of clusters for a given dataset. Theoretical research and experimental results indicate the validity and good performance of the proposed index and method.
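    As a rough illustration of a centre-based internal validity index, the sketch below scores each cluster by within-cluster compactness relative to the distance to its nearest neighbouring cluster centre and scans over k. The index definition here is a generic stand-in, not the one proposed in the paper.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=600, centers=4, random_state=0)     # stand-in data

        def centre_nn_index(X, labels, centres):
            scores = []
            for k, c in enumerate(centres):
                members = X[labels == k]
                compact = np.linalg.norm(members - c, axis=1).mean()    # within-cluster compactness
                sep = np.linalg.norm(np.delete(centres, k, axis=0) - c, axis=1).min()  # nearest centre
                scores.append(compact / sep)
            return float(np.mean(scores))    # smaller = tighter, better-separated clusters

        for k in range(2, 8):
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
            print(k, round(centre_nn_index(X, km.labels_, km.cluster_centers_), 3))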
       
  • Securing the operations in SCADA-IoT platform based industrial control
           system using ensemble of deep belief networks
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Shamsul Huda, John Yearwood, Mohammad Mehedi Hassan, Ahmad Almogren. The Internet of Things (IoT) platform is increasingly being used in modern industries. Billions of devices with smart sensing capabilities, PLCs, actuators, intelligent electronic devices (IEDs) of industrial control systems (ICS), and supervisory control and data acquisition (SCADA) networks are connected over the IoT platform. The IoT platform has given modern industries efficient monitoring and control of physical systems (various hardware and machinery), resulting in intelligent data acquisition and processing and a highly productive and profitable management of business. Initially, these devices were deployed without any security concern, on the assumption that they would run in isolated networks. In the new IoT platform scenario, SCADA-based ICS networks are integrated with corporate networks over the internet. Therefore, the devices of a SCADA network face a significant threat of malicious attacks, either through the vulnerabilities of the corporate network or through the devices used in the SCADA network. Traditional IT security software products are not enough for ICS, as these products consider only operating-system-related calls and the application program interface (API) behaviour of applications, which are focused only on corporate business solutions and related technologies. In this paper, we propose a secure architecture for the ICS network with a detection model based on SCADA network traffic. The proposed architecture develops two ensemble-based detection algorithms using a deep belief network (DBN) and standard classifiers, including support vector machines (SVM). The novelty of the proposed architecture is that it uses network traffic features and payload features for the detection model instead of conventional signature-based or API-based malware detection techniques. In addition, the ensemble-DBN of the proposed architecture can overcome many limitations of standard techniques, including the complexity and large size of the training data. The proposed architecture for ICS has been verified using real SCADA network data. Experimental results show that our ensemble-based detection system outperforms existing attack detection engines.
       
  • A modified crow search algorithm (MCSA) for solving economic load dispatch
           problem
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Farid Mohammadi, Hamdi Abdi. This paper presents a novel evolutionary optimization algorithm, namely the modified crow search algorithm (MCSA), for solving the non-convex economic load dispatch (ELD) problem; it improves the crow search algorithm (CSA) through an innovative selection of the crows and adaptive adjustment of the flight length. MCSA is a population-based technique based on the intelligent behavior of crows in finding food sources. In MCSA, each crow saves its food in hiding places for the time it needs. Also, each crow searches the environment to find better food by stealthily following other crows to discover their hiding places. The proposed MCSA develops the search capability of the crows in the original CSA and introduces a new way in which a crow selects a destination to follow. To indicate the applicability of MCSA to the ELD problem, it is applied to five different well-known test systems. The results are compared in terms of solution quality, robustness, and computing time with other methods, implying that the proposed method has superior performance compared to the other techniques.
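    For reference, a compact version of the original crow search algorithm that MCSA modifies is sketched below on a toy sphere function; the paper's innovations (guided selection of the crow to follow and adaptive flight length) are not included, and all parameter values are illustrative assumptions.

        import numpy as np

        def csa(obj, dim, lo, hi, n_crows=20, ap=0.1, fl=2.0, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (n_crows, dim))          # current positions
            M = X.copy()                                      # memories (best position per crow)
            fM = np.array([obj(x) for x in M])
            for _ in range(iters):
                for i in range(n_crows):
                    j = rng.integers(n_crows)                 # crow i follows a random crow j
                    if rng.random() >= ap:                    # j unaware: move toward j's memory
                        X[i] = X[i] + rng.random() * fl * (M[j] - X[i])
                    else:                                     # j aware: i flies to a random position
                        X[i] = rng.uniform(lo, hi, dim)
                X = np.clip(X, lo, hi)
                fX = np.array([obj(x) for x in X])
                better = fX < fM                              # keep memories of improved positions
                M[better], fM[better] = X[better], fX[better]
            return M[fM.argmin()], fM.min()

        best_x, best_f = csa(lambda v: float(np.sum(v ** 2)), dim=10, lo=-10, hi=10)
        print("best objective value:", best_f)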
       
  • An integrated approach to bearing prognostics based on EEMD-multi feature
           extraction, Gaussian mixture models and Jensen-Rényi divergence
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Akhand Rai, S.H. Upadhyay. This paper proposes an integrated bearing prognostics approach that is divided into two components. The first component develops a new health indicator (HI) for performance degradation assessment (PDA) of bearings. Many existing HIs suffer from certain shortcomings such as insensitivity to incipient defects and highly fluctuating behavior as degradation severity increases. To overcome these disadvantages, an HI based on Gaussian mixture models (GMM) and the Jensen-Rényi divergence (JRD) is suggested, which retains its monotonicity as the bearing condition deteriorates. Firstly, the acquired vibration signals are decomposed by ensemble empirical mode decomposition (EEMD) to extract the fault features, i.e., singular values and energy moment entropies of the intrinsic mode functions (IMFs). Secondly, the feature vectors under healthy conditions are used to train the GMM. Thirdly, the test feature vectors are supplied to the trained GMM to calculate the components’ posterior probabilities. Fourthly, the JRD measure is used to discriminate the defective posterior probability distributions from the healthy ones. Finally, the JRD is converted into a confidence value (CV) to realize bearing PDA. The second component utilizes the CV values to train a support vector regression (SVR) model. The particle swarm optimization (PSO) technique is implemented to attain the optimal values of the SVR hyperparameters. The optimal SVR model is then used to forecast the CV up to a predefined failure threshold and evaluate the bearing remaining useful life (RUL). The experimental results verify that the proposed CV performs better than the GMM-negative log likelihood probability (NLLP) for RUL estimation of bearings.
       
  • Multi-criteria group decision-making based on quasi-order for dual
           hesitant fuzzy sets and professional degrees of decision makers
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Zhiliang Ren, Zeshui Xu, Hai Wang. The dual hesitant fuzzy set (DHFS) is a useful tool that can assist decision makers (DMs) in expressing their evaluation information more flexibly from two contrary points of view, namely the membership degrees and the non-membership degrees. In this paper, we mainly focus on a quasi-order for DHFSs and the professional degrees of the DMs in multi-criteria group decision-making (MCGDM) under dual hesitant fuzzy information. First, we define a quasi-order for DHFSs based on pairwise comparisons. The quasi-order can distinguish two different DHFSs more clearly and rationally. Second, we propose professional degrees of the DMs based on dual hesitant fuzzy information and use the professional degrees to determine the weights of the DMs. The professional degrees can indicate the weights of the DMs from different viewpoints in MCGDM more reliably. In the process of decision-making, the professional degree of each DM for the decision-making problem can influence the final decision directly. On the one hand, the evaluation information has a close relationship with the professional degree of each DM. On the other hand, the professional degrees of the DMs should be a very important aspect in determining the weights of the DMs directly. In addition, we combine the quasi-order and the professional degrees in MCGDM. Finally, we use a practical numerical example on rural land conversion to illustrate the availability and practicability of the proposed MCGDM model.
       
  • Hardware efficient FIR filter design using Global Best Steered Quantum
           Inspired Cuckoo Search Algorithm
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Poulami Das, Sudip Kumar Naskar, Sankar Narayan Patra. In this paper, a new algorithm, namely the Global Best Steered Quantum Inspired Cuckoo Search Algorithm (GQICSA), is proposed for obtaining an optimized set of coefficients for implementing Finite Impulse Response (FIR) filters. The adder cost of a filter is estimated after quantizing the filter coefficients, followed by Common Sub-expression Elimination (CSE). We found from the simulation results that reducing the word length of the coefficients does not prevent the filters from achieving the ideal frequency response. Moreover, filters developed using GQICSA outperform the benchmark filters designed by the Parks-McClellan algorithm in terms of stopband attenuation. Analysis of the results reveals that GQICSA not only improves over various conventional algorithms, including the Cuckoo Search Algorithm (CSA), but also surpasses a modified version of CSA, the Quantum Inspired CSA (QICSA), updated using quantum principles, for optimizing filter coefficients to design filters with lower hardware cost without compromising the filter responses and efficiency. GQICSA also provides significant improvements compared to CSA and QICSA in terms of stopband attenuation and execution time for higher-order filter design. The efficiency of GQICSA over QICSA and the conventional CSA is also demonstrated on 16 benchmark functions.
       
  • Optimising latent features using artificial immune system in collaborative
           filtering for recommender systems
    • Abstract: Publication date: October 2018. Source: Applied Soft Computing, Volume 71. Author(s): Mlungisi Duma, Bhekisipho Twala. In collaborative filtering, the stochastic gradient descent (SGD) method is used to determine the latent features used in producing a non-negative N x M matrix of user-item ratings. The method is commonly used because it is straightforward to implement and has a relatively fast running time. In this paper, we propose an artificial immune system approach to matrix factorization (AISMF) to optimise the latent features during the learning process. Artificial immune systems have the advantage of being dynamic and adaptive and of being able to learn an antigen in a few cycles. Therefore, they are well suited for collaborative filtering in recommender systems. The performance of AISMF is compared to that of the user-based and item-based neighbourhood clustering methods, SGD, Slope-one and Tendency-based methods. The results show that AISMF converges faster to local minima for small to medium-sized datasets, and the AISMF ensemble performs better and faster, on average, on large datasets. The results also show that the AISMF ensemble is comparable to the SGD, user-based, item-based, Slope-one and Tendency-based methods in CF and can be used as an alternative learning and recommendation method in CF.
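    The SGD matrix-factorisation baseline the abstract refers to (latent user and item features fitted to the observed ratings) is small enough to sketch; the artificial-immune-system optimisation of AISMF itself is not shown, and the rating triples, rank and hyperparameters below are stand-in assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n_items, k = 50, 40, 8
        ratings = [(rng.integers(n_users), rng.integers(n_items), int(rng.integers(1, 6)))
                   for _ in range(500)]                  # (user, item, rating) triples, stand-in data

        P = 0.1 * rng.standard_normal((n_users, k))      # user latent features
        Q = 0.1 * rng.standard_normal((n_items, k))      # item latent features
        lr, reg = 0.01, 0.05

        for epoch in range(30):
            for u, i, r in ratings:
                pu = P[u].copy()
                err = r - pu @ Q[i]                      # error on one observed rating
                P[u] += lr * (err * Q[i] - reg * pu)     # SGD updates with L2 regularisation
                Q[i] += lr * (err * pu - reg * Q[i])

        rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
        print("training RMSE:", round(float(rmse), 3))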
       
  • Efficient fog prediction with multi-objective evolutionary neural networks
    • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): A.M. Durán-Rosal, J.C. Fernández, C. Casanova-Mateo, J. Sanz-Justo, S. Salcedo-Sanz, C. Hervás-Martínez. This paper proposes the application of novel artificial neural networks with evolutionary training and different basis functions (sigmoidal, product and radial) to a real problem of fog event classification from meteorological input variables. Specifically, a Multiobjective Evolutionary Algorithm is considered as the artificial neural network training mechanism in order to obtain a binary classification model for the detection of fog events at Valladolid airport (Spain). The evolutionary neural models developed are based on two-dimensional performance measures: traditional accuracy and the minimum sensitivity, defined as the lowest percentage of examples correctly predicted as belonging to each class with respect to the total number of examples in the corresponding class. These performance measures are directly related to features associated with any classifier: its global performance and the rate of the worst classified class. These two objectives are usually in conflict when the optimization process tries to construct models with a high classification rate on the generalization dataset and also with a good classification level for each class, or minimum sensitivity. A sensitivity analysis of the proposed models is carried out, so that the underlying relations between the input variables and the output classification target can be better understood.
       
  • Solving the dynamic weapon target assignment problem by an improved
           artificial bee colony algorithm with heuristic factor initialization
    • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Tianqing Chang, Depeng Kong, Na Hao, Kehu Xu, Guozhen Yang. Dynamic weapon target assignment (DWTA) is an effective method for solving the problem of battlefield firepower optimization over multiple stages and multiple rounds. The solution time of the DWTA affects the current allocation results and the assignment results in the next round. Aiming at the slow convergence rate and low search efficiency in solving DWTA, this paper proposes an improved artificial bee colony (ABC) algorithm with a new initialization method utilizing rule-based heuristic factors. The traditional ABC algorithm converges slowly and easily falls into a local extremum. Therefore, in this study, we first put forward an improved ABC algorithm based on ranking selection and elite guidance to improve the search efficiency. Second, aiming at the low quality of randomly generated initial solutions, we put forward four kinds of rule-based heuristic factors: a heuristic factor based on weapon-choice-priority, a heuristic factor based on target-choice-priority, a heuristic factor based on target-choice-priority with a random sequence, and a heuristic factor based on target-choice-priority with a random sequence and Cannikin Law. The heuristic factors are used in population initialization to improve the quality of the initial solutions. Finally, the heuristic factor initialization method is combined with the improved ABC algorithm to solve the DWTA problem with integer encoding according to the characteristics of DWTA. A comparative experiment of different algorithms for solving the DWTA problem at different scales was carried out. The experimental results show that the improved ABC algorithm combined with heuristic factor initialization obtains high-quality initial solutions, accelerates the solution process, and improves the accuracy in solving DWTA.
       
  • Adaptive region adjustment to improve the balance of convergence and
           diversity in MOEA/D
    • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Peng Wang, Bo Liao, Wen Zhu, Lijun Cai, Siqi Ren, Min Chen, Zejun Li, Keqin Li. The multiobjective evolutionary algorithm based on decomposition (MOEA/D), which decomposes a multiobjective optimization problem (MOP) into a number of optimization subproblems and optimizes them in a collaborative manner, has become more and more popular in the field of evolutionary multiobjective optimization. The mechanism for balancing convergence and diversity is very important in MOEA/D. In the process of optimization, the chosen solutions must be distinctive and as close as possible to the Pareto front. In this paper, we first explore the relation between subproblems and solutions. We then propose an adaptive region adjustment strategy to balance convergence and diversity, based on the concept of objective region partition. Finally, this strategy is embedded in the MOEA/D framework and a simple but efficient algorithm is proposed. To demonstrate the effectiveness of the proposed algorithm, comprehensive experiments have been designed. The simulation results show the effectiveness of the proposed algorithm.
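    The decomposition step at the heart of MOEA/D can be sketched with weight vectors and the Tchebycheff aggregation g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|. The toy bi-objective function, the bare (1+1) update per subproblem, and the omission of neighbourhood mating below are simplifying assumptions; the paper's adaptive region adjustment strategy is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        def objectives(x):                      # toy ZDT1-style bi-objective function (assumption)
            g = 1 + 9 * np.mean(x[1:])
            return np.array([x[0], g * (1 - np.sqrt(x[0] / g))])

        weights = np.array([[w, 1 - w] for w in np.linspace(0.05, 0.95, 10)])  # one weight vector per subproblem
        pop = rng.random((len(weights), 5))     # one solution per subproblem
        F = np.array([objectives(x) for x in pop])
        z = F.min(axis=0)                       # estimate of the ideal point z*

        def tcheby(f, lam, z):
            return np.max(lam * np.abs(f - z))  # Tchebycheff scalarising function

        for _ in range(300):
            for i, lam in enumerate(weights):
                child = np.clip(pop[i] + 0.1 * rng.standard_normal(5), 0, 1)
                fc = objectives(child)
                z = np.minimum(z, fc)           # keep the ideal point up to date
                if tcheby(fc, lam, z) < tcheby(F[i], lam, z):
                    pop[i], F[i] = child, fc    # accept if better on this subproblem
        print(np.round(F[F[:, 0].argsort()], 3))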
       
  • A novel stochastic fractal search algorithm for optimal allocation of
           distributed generators in radial distribution systems
    • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Tri Phuoc Nguyen, Dieu Ngoc Vo. This paper presents an application of a novel stochastic fractal search algorithm (SFSA) for solving the optimal allocation of distributed generators (OADG) problem in radial distribution systems (RDSs). The SFSA method is a new, efficient meta-heuristic algorithm inspired by the natural growth process, using the mathematical concept of fractals, for solving optimization problems. The advantages of the SFSA method are its simple implementation and few control parameters. The method is proposed to deal with the OADG problem by minimizing the active power loss, improving the voltage profile, and increasing the voltage stability while satisfying various constraints, including active and reactive power balance, bus voltage limits, DG capacity limits, branch current limits, and the DG penetration limit. The effectiveness of the proposed SFSA method has been verified on the IEEE 33-bus, 69-bus, and 118-bus RDSs, and the obtained results have been validated by comparison with those of other methods in the literature. The result comparisons on the test systems indicate that the proposed method can obtain higher-quality solutions than many other methods for the considered scenarios. Therefore, the proposed SFSA can be a very effective and favorable method for solving the OADG problem.
       
  • A Mahalanobis distance based algorithm for assigning rank to the predicted
           fault prone software modules
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Subhashis Chatterjee, Bappa Maji. This article proposes a methodology based on an Artificial Neural Network (ANN) and a type-2 fuzzy logic system (FLS) for detecting fault-prone software modules at an early development phase. The present research concentrates on software metrics from the requirement analysis and design phases of the software life cycle. A new approach has been developed to determine the degree of fault proneness (DFP) of software modules through the type-2 FLS, with the ANN used to prepare the rule base for the inference engine. Furthermore, the proposed model induces an order relation among the fault-prone modules (FPMs) with the help of the Mahalanobis distance (MD) metric. During the software development process, a project manager needs to recognize fault-prone software modules together with their DFP; hence, the present study is of value to project personnel in developing more cost-effective and reliable software. The NASA KC2 dataset has been used to validate the model. Performance analysis clearly indicates the better prediction capability of the proposed model compared to some existing similar models.
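     • Illustrative sketch: a minimal Python example of ranking software modules by Mahalanobis distance computed from their metric vectors, assuming distance from a fault-free reference point is read as fault proneness; the toy metrics, the reference point, and this interpretation are illustrative assumptions, not the article's model.

        import numpy as np

        def mahalanobis_rank(X, reference):
            # Rank rows of X (module metric vectors) by Mahalanobis distance from `reference`.
            cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse guards against singular covariance
            diff = X - reference
            d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis distance per row
            distances = np.sqrt(d2)
            order = np.argsort(-distances)                      # most distant first
            return distances, order

        # Toy example: 5 modules described by 3 requirement/design metrics.
        X = np.array([[2., 1., 3.], [8., 7., 9.], [1., 1., 2.], [5., 4., 6.], [9., 8., 7.]])
        reference = X.mean(axis=0)
        dist, ranking = mahalanobis_rank(X, reference)
        print(ranking)   # module indices ordered from most to least fault prone (under this assumption)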
       
  • A two-stage algorithm for network reconstruction
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Jianshe Wu, Hedi Yang, Yanhua Ren, X. Rong Li. The topology of a network is crucial to its function and behavior. In many cases, various data are obtained from the network, for example, information spreading data, gene expression microarray data, or game data, but the topology of the network is unknown. Reconstructing the topology of the network from the observed data is meaningful in many applications. In this paper, an evolutionary algorithm is proposed for network reconstruction from observed game data. The proposed two-stage evolutionary algorithm decomposes the network reconstruction problem into sequentially reconstructing the edges of each node. The edges of a node are described by the corresponding column vector of the network adjacency matrix. In the first stage, possible vectors are obtained by the proposed genetic algorithm. In the second stage, the true vector is obtained by the proposed heuristic local search. Both analyses and experiments show that the proposed evolutionary algorithm is more accurate and applicable in more general cases than the algorithm based on compressive sensing theory.
       
  • Stacking ensemble with parsimonious base models to improve generalization
           capability in the characterization of steel bolted components
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): A. Pernía-Espinoza, J. Fernandez-Ceniceros, J. Antonanzas, R. Urraca, F.J. Martinez-de-Pison. This study presents a new soft computing method to create an accurate and reliable model capable of determining three key points of the comprehensive force–displacement curve of bolted components in steel structures. To this end, a database with the results of a set of finite element (FE) simulations, which represent real responses of bolted components, is utilized to create a stacking ensemble model that combines the predictions of different parsimonious base models. The innovative proposal of this study is the use of GA-PARSIMONY, a previously published GA-based method that searches for parsimonious models by jointly optimizing feature selection and hyperparameters. Parsimonious solutions created with a variety of machine learning methods are then combined, by means of a nested cross-validation scheme, in a single meta-learner in order to increase diversity and minimize the generalization error rate. The results reveal that efficiently combining parsimonious models provides more accurate and reliable predictions compared to other methods. Thus, the informational model is able to replace costly FE simulations without significantly compromising accuracy and could be implemented in structural analysis software.
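     • Illustrative sketch: a minimal stacking-ensemble example in Python with scikit-learn, where a meta-learner combines out-of-fold predictions of diverse base models inside an outer cross-validation loop; the synthetic data and the particular base/meta learners are stand-ins, not the GA-PARSIMONY-tuned models of the study.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import StackingRegressor, RandomForestRegressor
        from sklearn.linear_model import Ridge
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        X, y = make_regression(n_samples=400, n_features=10, noise=0.3, random_state=0)

        # Diverse base models; the meta-learner (Ridge) combines their out-of-fold predictions.
        stack = StackingRegressor(
            estimators=[('rf', RandomForestRegressor(n_estimators=100, random_state=0)),
                        ('svr', SVR(C=10.0))],
            final_estimator=Ridge(alpha=1.0),
            cv=5,                       # inner folds produce out-of-fold predictions for the meta-learner
        )
        scores = cross_val_score(stack, X, y, cv=5, scoring='r2')   # outer loop of a nested CV
        print(scores.mean())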
       
  • Dynamic neural network-based feedback linearization control of full-car
           suspensions using PSO
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Jimoh O. Pedro, Muhammed Dangor, Olurotimi A. Dahunsi, M. Montaz Ali. This paper proposes a nonlinear control approach using dynamic neural network-based input–output feedback linearization to resolve the inherently conflicting performance criteria of a full-car nonlinear electrohydraulic active vehicle suspension system. Particle swarm optimization is applied both to train the dynamic neural network models and to compute the controllers' parameters. The intelligent control scheme outperformed the passive vehicle suspension system and the benchmark particle swarm-optimized proportional+integral+derivative controller. Effectiveness and robustness of the proposed controller are demonstrated through simulations in both the time and frequency domains.
       
  • Hybrid optimizers to solve a tri-level programming model for a tire
           closed-loop supply chain network design problem
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Amir Mohammad Fathollahi-Fard, Mostafa Hajiaghaei-Keshteli, Seyedali Mirjalili. A closed-loop supply chain (CLSC) is one way to collect and recycle scrapped tires. The CLSC consists of a set of suppliers, manufacturers, distributors and customers in the forward system, while collectors and recyclers form the reverse one. The decisions in such systems are adopted in a hierarchy. Accordingly, this study develops a tri-level programming model to design the location-allocation of the tire CLSC for the first time. The proposed model is formulated as a static Stackelberg game between manufacturers, distributors and collectors within the CLSC framework. Current exact solution approaches for this problem suffer from degraded performance when solving large-scale instances. To alleviate this drawback, this study proposes a number of new hybrid optimizers that draw on the benefits of recent metaheuristics. An extensive comparative study confirms the efficiency of the developed model and the performance of the proposed hybrid optimizers on large-scale problems. Graphical abstract: (A) scrapped tires may be left in nature and pollute the environment (photo taken from the case study); (B) the graphical structure of the tire closed-loop supply chain.
       
  • A multi-objective particle swarm optimization algorithm based on dynamic
           boundary search for constrained optimization
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Mohamad Zihin bin Mohd Zain, Jeevan Kanesan, Joon Huang Chuah, Saroja Dhanapal, Graham Kendall. Due to the increased search complexity of multi-objective optimization, premature convergence becomes a problem. Complex engineering problems pose a high number of variables with many constraints; hence, more difficult benchmark problems must be used to validate the performance of new algorithms. A well-known optimizer, the Multi-Objective Particle Swarm Optimizer (MOPSO), has a few weaknesses that need to be addressed, specifically its convergence on high-dimensional problems and its constraint-handling capability. For these reasons, we propose a modified MOPSO (M-MOPSO) to improve upon these aspects. M-MOPSO is compared with four other algorithms, namely MOPSO, the Multi-Objective Grey Wolf Optimizer (MOGWO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and Multi-Objective Differential Evolution (MODE). M-MOPSO emerged as the best algorithm on eight out of the ten constrained benchmark problems. It also shows promising results on bioprocess application problems and tumor treatment problems. Overall, M-MOPSO was able to solve multi-objective problems with good convergence and is suitable for use on real-world problems.
       
  • Application of GRNN and multivariate hybrid approach to predict and
           optimize WEDM responses for Ni-Ti shape memory alloy
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Himadri Majumder, Kalipada Maity. The current investigation highlights the application potential of a smart approximation tool, the general regression neural network (GRNN), for the machining of nickel-titanium (Ni-Ti) shape memory alloy using WEDM. A series of experiments was carried out based on Taguchi's orthogonal layout design. Pulse-on time, discharge current, wire tension, wire speed and flushing pressure were considered as five distinct WEDM parameters, whereas arithmetic mean roughness, maximum peak-to-valley height, root mean square roughness, and micro-hardness were selected as the major responses to be investigated. These WEDM responses were predicted with the help of the proposed GRNN model and compared with the experimental results. The investigation was further extended to ascertain the optimum combination of input parameters using a hybrid approach, obtained by combining the VIKOR method with a fuzzy logic system. The prediction error of the GRNN model was within ±5% over the studied range of machining parameters. Finally, the adequacy of the multivariate VIKOR-fuzzy approach was verified by a confirmation test, which exhibited improvement in the WEDM responses.
       
  • Spatially distributed statistical significance approach for real parameter
           tuning with restricted budgets
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Adolph J. Vogel, Daniel N. Wilke. Parameter tuning aims to find suitable parameter values for heuristic optimisation algorithms so that such algorithms can be applied in practice. Conventional tuning approaches view the tuning problem as two distinct problems, namely, a stochastic problem of quantifying the performance of a parameter vector and a deterministic problem of finding improved parameter vectors in the meta-design space. A direct consequence of this viewpoint is that parameter vectors are sampled multiple times to resolve their respective performance uncertainties. In this study we share an alternative viewpoint, which is to consider the tuning problem as a single stochastic problem for which both the spatial location and the performance of the optimal parameter vector are uncertain. A direct implication of this alternative stance is that every parameter vector is sampled only once. In the proposed approach, the spatial and performance uncertainties of the optimal parameter vector are resolved by spatial clustering of candidate parameter vectors in the meta-design space. In a series of numerical experiments on 16 test problems, we show that our approach, Efficient Sequential Parameter Optimisation (ESPO), outperforms both F/Race and Sequential Parameter Optimisation (SPO), especially for tuning under restricted budgets.
       
  • Failure mode and effects analysis using Dempster-Shafer theory and TOPSIS
           method: Application to the gas insulated metal enclosed transmission line
           (GIL)
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Zhen Wang, Jian-Min Gao, Rong-Xi Wang, Kun Chen, Zhi-Yong Gao, Yao Jiang. Failure mode and effects analysis (FMEA), a useful tool in safety and reliability analysis for identifying and eliminating known or potential failures during product design, development and production, has been widely used in a variety of industries. However, traditional FMEA has two major flaws: the acquisition and aggregation of experts' diverse evaluations, and the calculation of the risk priority number of failure modes. In this paper, an improved FMEA approach based on Dempster-Shafer Theory (DST) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is proposed to address these flaws and improve the effectiveness of traditional FMEA. In the proposed approach, a new aggregation method based on DST is introduced for aggregating various kinds of evaluations, and the TOPSIS method is then used to rank the failure modes according to the evaluations of their severity, occurrence and detectability. Based on DST and TOPSIS, the proposed FMEA approach can capture and aggregate the diverse evaluations of experts and rank the failure modes in a compromise manner under various uncertainties. Finally, the proposed approach is applied to the hazard analysis of the failure modes identified in a gas insulated metal enclosed transmission line (GIL) and yields good results consistent with the practical engineering background. Analysis and comparison with other methods show that the proposed approach is more applicable in real applications.
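     • Illustrative sketch: a minimal Python implementation of the TOPSIS ranking step applied to an aggregated severity/occurrence/detectability matrix; the scores, criterion weights, and the "larger is riskier" reading of all three criteria are illustrative assumptions, not the article's data or its DST aggregation.

        import numpy as np

        def topsis(matrix, weights, benefit):
            # Return TOPSIS closeness scores (higher = closer to the ideal alternative).
            # `matrix`: alternatives x criteria, `weights`: criterion weights,
            # `benefit`: boolean mask, True where larger values are preferred.
            norm = matrix / np.linalg.norm(matrix, axis=0)          # vector normalization
            v = norm * weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            return d_neg / (d_pos + d_neg)

        # Failure modes scored on severity, occurrence, detectability (1-10 scales);
        # treating all three as "larger is riskier", the closeness score ranks risk priority.
        scores = topsis(
            matrix=np.array([[7, 4, 6], [5, 8, 3], [9, 2, 7], [4, 5, 5]], dtype=float),
            weights=np.array([0.4, 0.35, 0.25]),
            benefit=np.array([True, True, True]),
        )
        print(np.argsort(-scores))   # failure modes from highest to lowest risk priority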
       
  • Adaptive critic-based quaternion neuro-fuzzy controller design with
           application to chaos control
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Pouria Tooranjipour, Ramin Vatankhah. Despite drawing on the advantages of both neural network features and fuzzy inference engines, neuro-fuzzy control structures always run into trouble because of the large number of fuzzy rules that results from a high system order or a large number of divisions considered for each input. In this paper, a new adaptive neuro-fuzzy controller is proposed based on quaternion numbers, and the problem of large rule numbers is thereby addressed by using the quaternion back-propagation concept. Furthermore, the use of reinforcement learning, in which output values are assessed by a critic, is another strength of the proposed method. Finally, in order to show the superiority and effectiveness of the proposed controller in comparison with conventional neuro-fuzzy ones, a complex and challenging chaos control problem, the control of a chaotic spinning disk, is provided.
       
  • Hyperspectral band selection based on multi-objective optimization with
           high information and low redundancy
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Mingyang Zhang, Maoguo Gong, Yongqiang Chan. For band selection in hyperspectral images, there are two crucial aspects, information preservation and redundancy reduction, which have a great influence on subsequent applications. Most current works have a fixed preference on these two aspects. However, since different data sets possess different numerical characteristics, the optimal preferences on these two aspects may differ and are difficult to decide. Therefore, a method is required that can explore optimal trade-offs between these two aspects according to the characteristics of each data set. To address this challenge, a novel multi-objective optimization model for band selection is proposed. The model comprises two conflicting objective functions: one measures the amount of information and the other measures the redundancy contained in the selected bands. Through this model, the two aspects are quantified, which makes it possible to optimize them simultaneously. To optimize this model, a new multi-objective immune algorithm is designed to fit the characteristics of hyperspectral data; it is able to obtain a series of Pareto-optimal solutions that represent different optimal trade-offs between the two objective functions. In this way, the proposed method can explore the optimal trade-offs between the two aspects and provide decision makers with more options according to the characteristics of different data sets. Experiments are carried out on three real hyperspectral data sets, and the results show the superiority of the proposed method.
       
  • A modified Bee Colony Optimization (MBCO) and its hybridization with
           k-means for an application to data clustering
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Pranesh Das, Dushmanta Kumar Das, Shouvik Dey. Among nature-inspired heuristic and meta-heuristic optimization algorithms, Bee Colony Optimization (BCO) algorithms are widely used to solve clustering problems. In this paper, a modified BCO (MBCO) approach is proposed for data clustering. The proposed MBCO accounts for the forgiveness characteristic of bees and gives a fair chance to both trustworthy and untrustworthy bees. A probability-based selection (Pbselection) approach is introduced in the proposed MBCO for assigning unallocated data points in every iteration. The results show that the proposed method converges faster than existing well-known algorithms. In order to further improve the performance of MBCO and obtain globally optimal and diverse solutions, the proposed MBCO is hybridized with the k-means algorithm. On average, the hybridized MKCLUST and KMCLUST variants give the same or better results than the proposed MBCO. To validate the proposed algorithms, seven standard data sets are considered. From the calculated classification error percentages, it is observed that the proposed algorithms perform better than some existing algorithms. The simulation results suggest that the proposed algorithms can be efficiently used for data clustering.
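     • Illustrative sketch: one common way to hybridize a population-based search with k-means is to let the metaheuristic supply the initial centroids that k-means then refines locally; the stand-in random-restart "search", the data, and the seeding convention below are assumptions for illustration, not the MBCO procedure of the article.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

        def metaheuristic_centroids(X, k, iters=50, seed=0):
            # Stand-in for the bee-colony search: keep the random centroid set with the
            # lowest within-cluster sum of squares found over `iters` trials.
            rng = np.random.default_rng(seed)
            best, best_cost = None, np.inf
            for _ in range(iters):
                cand = X[rng.choice(len(X), size=k, replace=False)]
                d = np.linalg.norm(X[:, None, :] - cand[None, :, :], axis=2)
                cost = float((d.min(axis=1) ** 2).sum())
                if cost < best_cost:
                    best, best_cost = cand, cost
            return best

        init_centroids = metaheuristic_centroids(X, k=3)
        # Local refinement: k-means started from the metaheuristic's best centroids.
        km = KMeans(n_clusters=3, init=init_centroids, n_init=1, random_state=0).fit(X)
        print(km.inertia_)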
       
  • Co-evolutionary multi-task learning for dynamic time series prediction
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Rohitash Chandra, Yew-Soon Ong, Chi-Keong Goh. Time series prediction typically involves a data reconstruction phase where the time series is broken into overlapping windows known as the timespan. The size of the timespan can be seen as a way of determining the extent of past information required for an effective prediction. In certain applications, such as the prediction of the wind intensity of storms and cyclones, prediction models need to be dynamic in accommodating different values of the timespan; these applications require robust prediction as soon as the event takes place. We identify a new category of problem called dynamic time series prediction, which requires a model to give predictions when presented with varying lengths of the timespan. In this paper, we propose a co-evolutionary multi-task learning method that provides a synergy between multi-task learning and co-evolutionary algorithms to address dynamic time series prediction. The method makes effective use of building blocks of knowledge, inspired by dynamic programming and multi-task learning, and enables neural networks to retain modularity during training so as to make a decision even when certain inputs are missing. The effectiveness of the method is demonstrated using one-step-ahead chaotic time series and tropical cyclone wind-intensity prediction.
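     • Illustrative sketch: a minimal Python example of the data reconstruction step, pooling overlapping windows of several timespans into one training set so a model can later be queried with varying input lengths; the left-padding-with-NaN convention (to be handled by masking downstream) is an assumption for illustration, not the authors' multi-task construction.

        import numpy as np

        def windows(series, timespan):
            # Break a 1-D series into overlapping (input, one-step-ahead target) pairs of length `timespan`.
            X, y = [], []
            for t in range(len(series) - timespan):
                X.append(series[t:t + timespan])
                y.append(series[t + timespan])
            return np.array(X), np.array(y)

        def dynamic_dataset(series, max_span):
            # Pool windows for every timespan up to `max_span`, left-padding shorter ones with NaN
            # so a single model sees inputs of varying effective length.
            Xs, ys = [], []
            for span in range(2, max_span + 1):
                X, y = windows(series, span)
                pad = np.full((len(X), max_span - span), np.nan)
                Xs.append(np.hstack([pad, X]))
                ys.append(y)
            return np.vstack(Xs), np.concatenate(ys)

        series = np.sin(np.linspace(0, 20, 500))
        X, y = dynamic_dataset(series, max_span=7)
        print(X.shape, y.shape)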
       
  • A parallel multi-objective particle swarm optimization for cascade
           hydropower reservoir operation in southwest China
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Wen-jing Niu, Zhong-kai Feng, Chun-tian Cheng, Xin-yu Wu. Due to the expanding system scale and increasing operational complexity, cascade hydropower reservoir operation balancing generation benefit and firm output is becoming one of the most important problems in China's hydropower system. To handle this problem, this paper presents a parallel multi-objective particle swarm optimization (PMOPSO) in which a swarm with a large population size is divided into several smaller subswarms that are optimized simultaneously by different worker threads. In each subtask, multi-objective particle swarm optimization is adopted for the entire evolutionary process, with the leader particles, the external archive set and the computational parameters all updated dynamically. Besides, a novel constraint handling strategy is used to identify the feasible search space, while a domination strategy based on constraint violation is used to enhance the convergence speed of the swarm. The presented method is applied to the Lancang cascade hydropower system in southwest China. The results show that PMOPSO can provide satisfactory scheduling results in different cases. For the variation coefficient of generation over 30 independent runs, the presented method improved on the serial algorithm by about 66.67% and 61.29% in normal and dry years, respectively. Hence, this paper provides an effective tool for the multi-objective operation of cascade hydropower systems.
       
  • Six sigma robust multi-objective optimization modification of machine-tool
           settings for hypoid gears by considering both geometric and physical
           performances
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Han Ding, Jinyuan Tang. With the increasing demands for low noise and high strength from gear transmission systems in industrial applications, a collaborative optimization considering both geometric and physical performances has become increasingly significant for the high-performance, complex manufacturing of hypoid gears. Recently, machine-tool setting modification has provided an important avenue for this optimization design; however, ensuring its data-driven robustness or reliability is very difficult. To deal with this problem, this paper presents a six sigma (6σ) robust multi-objective optimization (MOO) modification of machine-tool settings. Firstly, the 6σ robust optimization formulation is applied in the numerical result evaluations. Then, a novel data-driven model for MOO modification of machine-tool settings is built by establishing the functional relationships between the machine-tool settings and the respective performance evaluations. These can be integrated into a 6σ robust MOO machine-tool setting modification for hypoid gears with higher quality requirements. Finally, within the decision and optimization process, an achievement function approach is applied to solve the MOO modification for the Pareto front, and a sensitivity-based variability estimation is used to identify the robust solution. Numerical applications are given to verify the proposed methodology.
       
  • Bio-inspired neurocomputing with 256 noise oscillators simulating photo
           response of Euglena cells
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Kazunari Ozasa, June Won, Simon Song, Mizuo Maeda. We developed a bio-inspired neurocomputing approach based on our earlier biological neurocomputer, which applied the survival strategies of living micro-algae cells (Euglena gracilis) to soft computing. Instead of using real living cells, the bio-inspired neurocomputing in this study (namely, Euglena-inspired neurocomputing) mimics the photophobic responses of the cells using photo-responsive (PR) noise oscillators. The PR noise oscillators play the role of neurons during computation, and their output signals are changed autonomously both by noise generation and by the firing of the neuron. The Euglena-inspired neurocomputing achieved high performance in searching for multiple solutions continuously and autonomously for a combinatorial optimization problem, a 16-city TSP as an instance. We analyzed the temporal evolution of the computation and its dependence on the parameter set of the PR noise oscillators, and identified the source of the high performance as the trade-off between the noise amplitude and the reduction ratio of the oscillators. We then introduced two specific survival strategies observed in real Euglena cells into the PR noise oscillators and elucidated their positive effects on performance. The Euglena-inspired neurocomputing examined in this study can be used to address dynamically changing optimization problems, since the computation tracks changes in the imposed conditions by autonomous, non-converged searching for solutions.
       
  • Algorithmic financial trading with deep convolutional neural networks:
           Time series to image conversion approach
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Omer Berat Sezer, Ahmet Murat Ozbayoglu. Computational intelligence techniques for financial trading systems have always been quite popular. In the last decade, deep learning models have attracted increasing attention, especially within the image processing community. In this study, we propose a novel algorithmic trading model, CNN-TA, using a 2-D convolutional neural network based on image processing properties. In order to convert financial time series into 2-D images, 15 different technical indicators, each with different parameter selections, are utilized. Each indicator instance generates data for a 15-day period; as a result, 15 × 15 2-D images are constructed. Each image is then labeled as Buy, Sell or Hold depending on the hills and valleys of the original time series. The results indicate that, when compared with the Buy & Hold strategy and other common trading systems over a long out-of-sample period, the trained model provides better results for stocks and ETFs.
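     • Illustrative sketch: a minimal Python example of turning a price series into 15 × 15 "indicator images" and labeling them from local hills and valleys; simple moving averages of 15 different periods stand in for the article's 15 technical indicators, and the labeling window and periods are illustrative assumptions.

        import numpy as np

        def sma(prices, period):
            # Simple moving average aligned so index t uses prices up to t (NaN during warm-up).
            out = np.full_like(prices, np.nan, dtype=float)
            for t in range(period - 1, len(prices)):
                out[t] = prices[t - period + 1:t + 1].mean()
            return out

        def to_images(prices, periods=range(6, 21), days=15):
            # Stack 15 indicator rows (here SMAs of 15 periods) over the last 15 days into one image per step.
            rows = np.vstack([sma(prices, p) for p in periods])        # 15 x T
            images, idx = [], []
            for t in range(max(periods) + days, len(prices)):
                images.append(rows[:, t - days:t])
                idx.append(t)
            return np.array(images), np.array(idx)

        def label(prices, t, window=5):
            # Buy at a local valley, Sell at a local hill, Hold otherwise (illustrative rule).
            lo, hi = max(0, t - window), min(len(prices), t + window + 1)
            if prices[t] == prices[lo:hi].min():
                return 'Buy'
            if prices[t] == prices[lo:hi].max():
                return 'Sell'
            return 'Hold'

        prices = np.cumsum(np.random.default_rng(0).normal(size=400)) + 100
        images, idx = to_images(prices)
        labels = [label(prices, t) for t in idx]
        print(images.shape, labels[:5])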
       
  • Flexible job shop scheduling problem with interval grey processing time
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Naiming Xie, Nanlei Chen. Job processing times are not always given accurately as time quotas in a production system, particularly in the manufacturing of complex products. It is therefore meaningful to study novel models and algorithms based on uncertain processing times so as to solve uncertain job shop scheduling problems. This paper defines the inaccurate time quota as an interval grey processing time and subsequently proposes a novel uncertain job shop scheduling model with the objective of minimizing the interval grey makespan. By defining the arithmetic operations and the Gantt chart for interval grey processing times, a genetic algorithm coupled with an elitism strategy in external memory is designed. Finally, the proposed algorithm is tested on composite-component job-shop scheduling cases of different sizes. The results show that the proposed algorithm is suitable for solving the uncertain job shop scheduling problem with interval grey processing times.
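     • Illustrative sketch: interval addition and a component-wise maximum are the two operations needed to roll an interval grey makespan forward through a schedule; the two-machine flow-shop recurrence and the bound-wise max convention below are illustrative assumptions, not necessarily the operations defined in the article.

        def i_add(a, b):
            # Sum of two interval grey numbers [a_lo, a_hi] + [b_lo, b_hi].
            return (a[0] + b[0], a[1] + b[1])

        def i_max(a, b):
            # Component-wise lower/upper bound maximum (one common convention for intervals).
            return (max(a[0], b[0]), max(a[1], b[1]))

        # Jobs visit machine 1 then machine 2; a job starts on machine 2 once both the job's
        # first operation and the machine are free: C2_j = max(C1_j, C2_{j-1}) + p2_j.
        jobs = [((3, 5), (2, 4)), ((1, 2), (6, 7)), ((4, 6), (3, 3))]   # interval grey processing times
        m1_done, m2_done = (0, 0), (0, 0)
        for p1, p2 in jobs:
            m1_done = i_add(m1_done, p1)
            m2_done = i_add(i_max(m1_done, m2_done), p2)
        print("interval grey makespan:", m2_done)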
       
  • Effective multi-objective discrete optimization of Truss-Z layouts using a
           GPU
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Machi Zawidzki, Jacek Szklarski. Truss-Z (TZ) is an Extremely Modular System for creating skeletal free-form ramps and ramp networks. TZ structures are composed of four variations of two types of basic unit subjected to rotation; the two unit types, R and L, are mirror reflections of each other. This paper presents a novel method, based on image processing, an evolutionary algorithm and intensive parallelization, for the multi-objective optimization of TZ layouts. The algorithm returns a sequence of modules. The result guarantees a TZ connection between two given points (regions) and minimizes a fitness function representing the costs associated with setting up the TZ structure. The fitness function depends on the cost of the TZ structure as well as a variety of costs related to the environment where it is to be placed, e.g. earthworks, vegetation removal, obstacle avoidance, etc. There are no restrictions on the fitness function definition: it can depend on any variable that can be represented by a two-dimensional map of some property of the environment. The formulation of the presented method is suited to the application of well-established image processing methods, which efficiently evaluate candidate solutions on a GPU; as a result, the employed genetic algorithm efficiently probes the search space. The practical applicability of this approach is demonstrated with three case studies: (1) simultaneous paving of a path with congruent units in a hilly environment with trees and bushes, and finding the best location for a pier over an existing river; (2) construction of a TZ connector spanning a mountain valley with lakes (where supports cannot be placed); (3) retrofitting an existing railway station with a large wheelchair TZ ramp of over 10 m elevation while preserving trees and minimizing the earthworks.
       
  • Hybrid force/position control of robotic arms manipulating in uncertain
           environments based on adaptive fuzzy sliding mode control
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Abbas Karamali Ravandi, Esmaeel Khanmirza, Kamran Daneshjou. In this study, the hybrid force/position control of robotic manipulators operating in uncertain environments is addressed by integrating fuzzy logic with conventional sliding mode control (SMC). After decomposing the manipulator dynamics into position, force, and redundant joint subspaces, the universal approximation capability of fuzzy systems is employed to approximate the equivalent part of the control input constructed based on the SMC concept. The robust part of the controller is estimated by an adaptive PI controller to compensate for deviations due to the presence of model uncertainties and perturbations. Furthermore, adaptation laws are derived for updating the parameters online when changes in the system dynamics occur. The proposed adaptive fuzzy sliding mode control (AFSMC) requires the least information about the manipulator dynamic structure and the physical properties of the environment among the hybrid force/position control methods presented so far; indeed, it needs neither an estimate of the dynamic model nor bounds on the uncertainties in advance. The asymptotic stability of the proposed controller is also proved in the sense of the Lyapunov theorem. The simulation results show the good performance of the proposed controller in coping with uncertainties. The proposed scheme is also compared with the standard SMC methodology, and its superior robustness is shown in comparison with those methods that require an estimate of the plant's mathematical model.
       
  • Multiobjective evolutionary optimization of traffic flow and pollution in
           Montevideo, Uruguay
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Matias Péres, Germán Ruiz, Sergio Nesmachnow, Ana C. Olivera. Traffic congestion and pollution are important problems in modern cities. As improving traffic flow via infrastructure modifications is expensive and intrusive, approaches using simulations emerge as economic alternatives to test different policies, with less negative impact on cities. This article proposes a specific methodology combining simulation and multiobjective evolutionary methods to simultaneously optimize traffic flow and vehicular emissions via traffic lights planning in urban areas. The experimental evaluation is performed over three real areas in Montevideo (Uruguay). Significant improvements on travel times and pollution are reported over the current configuration of traffic lights cycles and also over other traffic regulation techniques. Moreover, the multiobjective approach provides policy-makers with a set of alternatives to choose from, allowing the evaluation of several scenarios and the dynamic modification of traffic light cycles.
       
  • Soft Computing based object detection and tracking approaches:
           State-of-the-Art survey
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Manisha Kaushal, Baljit S. Khehra, Akashdeep Sharma. In recent years, the analysis and interpretation of video sequences to detect and track objects of interest has become an active research field in computer vision and image processing. Detection and tracking includes the extraction of moving objects from frames and their continuous tracking thereafter, forming persistent object trajectories over time. Researchers have proposed some very clever techniques for the efficient and robust detection and tracking of objects in videos. This paper offers a comprehensive coverage of such innovative techniques whose solutions have been motivated by theories of soft computing. The main objective of this investigation is to study and highlight the efforts of researchers who have conducted notable work on soft computing-based detection and tracking approaches for video sequences. The study is novel in that it traces the rise of soft computing methods in the field of object detection and tracking in videos, which has been neglected over the years. The survey compiles studies on neural network, deep learning, fuzzy logic, evolutionary, hybrid and recent innovative approaches that have been applied to detection and tracking. The paper also highlights benchmark datasets available to researchers for the experimentation and validation of their own algorithms. Major research challenges in the field of detection and tracking, along with some recommendations, are also provided. The paper provides a number of analyses to guide future directions of research and advocates for more applications of soft computing approaches to object detection and tracking in videos. The paper is aimed at young researchers who may see it as an introduction to a mature and relatively complex field. The study will be helpful in the appropriate use of an existing method, in systematically designing a new approach, or in improving the performance of existing approaches.
       
  • Two effective simulated annealing algorithms for the Location-Routing
           Problem
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Kamyla Maria Ferreira, Thiago Alves de Queiroz. Two heuristics based on the simulated annealing method are presented for solving the capacitated version of the Location-Routing Problem. The first heuristic has four components: generate an initial solution using a greedy procedure that allocates customers to the closest facilities; apply four neighborhood operators based on swap and insertion moves; perturb the best solution by solving 0-1 knapsack problems and then improve the resulting solution with the Lin-Kernighan heuristic; and diversify the solution by closing and opening different facilities. The second heuristic is a simplified version of the first in which the third component is omitted because it may be time-consuming. Computational experiments conducted on three sets of benchmark instances show that the two heuristics are very competitive and have outperformed other methods published in the literature. They presented overall average gaps of 0.17% and 0.13%, respectively, and improved previous solutions within a reasonable computing time for nine large instances.
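     • Illustrative sketch: a generic simulated annealing acceptance loop in Python, of the kind such heuristics are built around; the cooling schedule, neighborhood operator and toy objective are placeholders, not the four LRP-specific components of the article.

        import math
        import random

        def simulated_annealing(initial, cost, neighbor, t0=100.0, alpha=0.95,
                                iters_per_temp=50, t_min=1e-3, rng=random.Random(0)):
            # Generic SA loop: always accept improving moves, accept worse moves
            # with probability exp(-delta / T), and cool geometrically.
            current, current_cost = initial, cost(initial)
            best, best_cost = current, current_cost
            t = t0
            while t > t_min:
                for _ in range(iters_per_temp):
                    cand = neighbor(current, rng)
                    delta = cost(cand) - current_cost
                    if delta < 0 or rng.random() < math.exp(-delta / t):
                        current, current_cost = cand, current_cost + delta
                        if current_cost < best_cost:
                            best, best_cost = current, current_cost
                t *= alpha
            return best, best_cost

        # Toy usage: minimize a 1-D function with a small random-step neighborhood.
        best, val = simulated_annealing(
            initial=5.0,
            cost=lambda x: (x - 2.0) ** 2,
            neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
        )
        print(best, val)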
       
  • A novel phase angle-encoded fruit fly optimization algorithm with mutation
           adaptation mechanism applied to UAV path planning
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Xiangyin Zhang, Xingyang Lu, Songmin Jia, Xiuzhi Li. This paper proposes an improved version of the fruit fly optimization algorithm (FOA) to solve the unmanned aerial vehicle (UAV) path planning problem. The improved algorithm combines phase angle-encoded and mutation adaptation mechanisms with the basic FOA and is referred to as θ-MAFOA. The mutation adaptation mechanism is adopted to enhance the balance between the exploitation and exploration abilities of FOA, while the phase angle-based encoding strategy for fruit fly locations helps to achieve high performance in the convergence process. The proposed θ-MAFOA is then used to find the optimal flight path for a UAV in complex three-dimensional (3D) terrain environments with various ground defense weapons. A B-Spline curve connecting the start and target points is employed to represent a smooth path. Several performance criteria and constraints are taken into consideration to evaluate the cost of UAV paths. Numerical experiments are carried out on various test scenarios, and the results show that the proposed θ-MAFOA is superior to the basic FOA and two other modified versions of FOA, and is also more powerful than several existing state-of-the-art optimization algorithms.
       
  • Positive or negative solutions to first-order fully fuzzy linear
           differential equations under generalized differentiability
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): M. Chehlabi, T. Allahviranloo. The purpose of this paper is to study solutions to a class of first-order fully fuzzy linear differential equations from the point of view of generalized differentiability. The article begins by considering and analyzing positive or negative fuzzy-valued functions defined on a real interval. Some of the topics needed for the results of this study, such as the continuity and differentiability of the product of fuzzy-valued functions and the derivative of fuzzy-valued exponential functions, are explained in detail. We then obtain the solution functions, and necessary and sufficient conditions for their existence, based on the solutions of associated systems of ordinary differential equations. Finally, some applications are presented and examples are given to show the desirable behavior of the obtained solutions.
       
  • A multi-population algorithm for multi-objective knapsack problem
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Imen Ben Mansour, Matthieu Basseur, Frédéric Saubion. Local search algorithms constitute a growing area of interest for approximating the Pareto sets of multi-objective combinatorial problem instances. In this study, we focus on the multi-objective knapsack problem and its optimization through a multi-population cooperative framework. The proposed approach, Wϵ-CMOLS, uses an indicator-based multi-objective local search algorithm (IBMOLS), initially presented by Basseur and Burke in 2007, and integrates it into a cooperative model. The idea is to optimize the overall quality of a Pareto set approximation by evolving several sub-populations in parallel, each population executing a different configuration of IBMOLS. The algorithm uses a weighted version of the epsilon quality indicator defined by different weight vectors. The populations cooperate by sharing a non-dominated archive, which stores the best compromises found during the optimization process and which is used to regularly re-initialize each sub-population. Wϵ-CMOLS is compared with state-of-the-art algorithms such as IBEA, NSGA-II and SPEA2. Experiments highlight that the use of a cooperative model, together with a weighted indicator to guide the search toward different directions, can lead to interesting results on the multi-objective knapsack problem.
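     • Illustrative sketch: the standard additive epsilon indicator, on which the weighted variant mentioned above is based, measures how far one approximation set must be shifted to cover a reference set (minimization assumed); the optional per-objective weighting shown here is only one plausible reading of the weighted version, and the toy fronts are made up.

        import numpy as np

        def additive_epsilon(A, R, weights=None):
            # Smallest eps such that every point of reference set R is covered by some point of A
            # shifted by eps in every (optionally weighted) objective; minimization convention.
            A = np.asarray(A, dtype=float)
            R = np.asarray(R, dtype=float)
            if weights is not None:                       # weighted variant (assumed form)
                A = A * weights
                R = R * weights
            # eps = max over r in R of  min over a in A of  max over objectives of (a_i - r_i)
            diff = A[None, :, :] - R[:, None, :]          # shape |R| x |A| x m
            return diff.max(axis=2).min(axis=1).max()

        A = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]]
        R = [[0.5, 3.5], [1.5, 1.5], [3.5, 0.5]]
        print(additive_epsilon(A, R))                     # 0.5: A needs a 0.5 shift to cover R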
       
  • Bidimensional comparison of the extremely low frequency magnetic field
           ranges on the laptop surface
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Darko Brodić, Alessia Amelio, Radmila Janković. In this paper, we present a new method for evaluating the similarity of measured and predicted data, which is of great importance for validating a prediction model. In particular, we provide a new similarity measure, used for the comparison of bidimensional arrays. The new similarity measure is employed in the context of the extremely low frequency magnetic field emitted by laptops. Accordingly, a model predicts the magnetic field levels emitted by the laptop. The experiment is performed on laptops working in normal operating condition and under stress. Measured and predicted levels are projected on the top and bottom surfaces of the laptops, producing bidimensional dangerousness maps. The similarity measure is used for the comparison of measured and predicted maps in normal operating condition and under stress, considering the differences in the levels and their disposition in the two maps. Finally, the similarity measure is employed for comparing different prediction models. The obtained results point to a new direction in the measurement and validation of magnetic field data.
       
  • Robust predictive control of wheel slip in antilock braking systems based
           on radial basis function neural network
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Hossein Mirzaeinejad. The Anti-lock Braking System (ABS) is a well-known technology for enhancing vehicle safety during hard braking. Wheel slip control has been a challenging problem due to the complex behavior of the tire and the strong nonlinearity of the braking process. Furthermore, the system is subject to unknown uncertainties arising from changing vehicle parameters and unmodeled dynamics. Thus, a nonlinear robust control law for ABS is required to overcome these problems. In this paper, a novel robust prediction-based controller for ABS is proposed that guarantees stability against uncertainties. An optimal control law is first designed for ABS using a nonlinear predictive method. Then, the unknown uncertainties are adaptively approximated using a radial basis function neural network (RBFNN). The Lyapunov approach is employed to develop an update law for the network weights. Finally, simulations are conducted to examine the performance of the proposed control system in tracking the reference wheel slip in the presence of uncertainties during different maneuvers. The performance of the proposed controller is also compared with a conventional sliding mode controller (SMC) through simulation results.
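     • Illustrative sketch: a minimal RBF network approximating an unknown nonlinearity, of the kind used above to model the uncertainty term; here fixed Gaussian centers and batch least-squares output weights are used for simplicity, whereas the article updates the weights online through a Lyapunov-based law, and the target function, centers and width below are illustrative.

        import numpy as np

        def rbf_features(x, centers, width):
            # Gaussian radial basis activations for scalar inputs `x`.
            return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

        rng = np.random.default_rng(0)
        x = np.linspace(-1, 1, 200)
        target = np.sin(3 * x) + 0.3 * x ** 2 + 0.05 * rng.normal(size=x.size)   # stand-in uncertainty

        centers = np.linspace(-1, 1, 9)                   # fixed, evenly spaced centers
        Phi = rbf_features(x, centers, width=0.25)
        w, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # batch least-squares weights
        approx = Phi @ w
        print(np.max(np.abs(approx - target)))            # worst-case approximation error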
       
  • Key generation for plain text in stream cipher via bi-objective
           evolutionary computing
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Gutha Jaya Krishna, Vadlamani Ravi, S. Nagesh Bhattu. Evolutionary algorithms are widely used to solve a wide variety of continuous, discrete and combinatorial optimization problems. Evolutionary multi-objective optimization seeks a Pareto front in order to negotiate the trade-off among the various objective functions present in the problem. Much of the literature on cryptography focuses on making the inference problem harder in order to secure the content. In this paper, we develop key generation algorithms using the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) in a bi-objective optimization framework, and Improved Modified Harmony Search + Differential Evolution (IMHS+DE), Differential Evolution (DE) and Improved Modified Harmony Search (IMHS) in a single-objective optimization framework. For encoding the generated keystream as well as the plain text, we employ the Mutated Huffman Tree Coding algorithm. In the next phase, we encrypt the encoded keystream and the encoded plain text in order to generate the cipher text. We then decrypt the cipher text using the encoded keystream, followed by decoding the deciphered text using the code tables. Following the literature, we generate random texts of varying lengths and code table sizes in order to demonstrate the effectiveness of the proposed method. The proposed algorithms are compared with extant methods. In the bi-objective optimization setup, we also plot the empirical attainment function (EAF) surface to summarize the effectiveness of the NSGA-II based key generation algorithm. Of particular significance is the highest entropy value yielded by the NSGA-II based algorithm, which in turn indicates the strength of the key it generates.
       
  • Manufacturing service composition model based on synergy effect: A social
           network analysis approach
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Minglun Ren, Lei Ren, Hemant Jain. Service social network is an umbrella term used to describe several interaction and collaboration phenomena that are shaping the future of how services are provided on cloud manufacturing platforms. Social relationships play an important role when services are orchestrated with each other to build manufacturing business processes, a role which has not been adequately investigated in previous research. Existing manufacturing service composition methods consider functional qualifications and Quality of Service (QoS) as the major competitiveness factors. It is difficult to adopt them in situations where a synergy effect is required and social relationships have a significant impact on ensuring effective hand-offs of resources, information and knowledge in complex task processes. Focusing on the social collaboration features of manufacturing services, a service composition method based on the synergy effect is proposed. From data on service interaction and cooperation on the cloud platform, we extract and describe a service social network and five kinds of relationships, namely interactive transaction, co-community, physical distance, resource-related and social similarity relationships. Based on the calculated strengths of these relationships, a service synergy network is derived through weighted aggregation. A service selection model that maximizes the overall synergy effect based on the collaboration requirement is presented. The validity and advantages of our model and algorithm are demonstrated through a simulation experiment on intelligent automobile cloud manufacturing. The results show that our approach is not only efficient but also finds better service schemes in line with the actual manufacturing scenario.
       
  • Natural neighborhood graph-based instance reduction algorithm without
           parameters
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Lijun Yang, Qingsheng Zhu, Jinlong Huang, Dongdong Cheng, Quanwang Wu, Xiaolu Hong. Instance-based learning often uses all instances in a training set to construct inference structures. The large number of instances and attributes may lead to high storage requirements and low search efficiency. Instance reduction addresses these issues by removing irrelevant instances and noise from the training set. However, existing reduction techniques still suffer from parameter dependency and relatively low accuracy and reduction rates. In this study, we present a natural neighborhood graph-based instance reduction algorithm, namely NNGIR. A natural neighborhood graph (NaNG) is automatically constructed by the natural neighbor search algorithm; this graph provides a compact description of the nearest neighbor relation over pairs of instances. NNGIR applies NaNG to divide the original training set into noisy, border and internal instances. Next, the algorithm obtains a reduced set by eliminating noisy and redundant points. NNGIR has three main advantages: (1) it is a parameter-free instance reduction algorithm due to the use of natural neighborhood graphs; (2) it strongly increases the reduction rate while maintaining or even improving the prediction accuracy; (3) its fluctuation in reduction rates across different types of data sets is notably small. The efficiency of NNGIR is supported by the positive results of experiments conducted on both synthetic and real data sets.
       
  • Predicting the climbing rate of slip formwork systems using linear
           biogeography-based programming
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Emadaldin Mohammadi Golafshani, Siamak Talatahari. Nowadays, selecting a fast and appropriate construction method for high-rise concrete structures is an undeniable necessity. Slip formwork technology, as an automatic formwork system, has many advantages for high-rise buildings and can reduce construction time and costs. However, determining the climbing rate of slip formwork systems is a challenging task that depends on different factors. In this paper, the potential factors in calculating the climbing rate were identified. Then, a comprehensive database including 81 slip formwork projects in Iran was gathered. Afterwards, a symbolic regression method called linear biogeography-based programming was introduced and applied to extract a formula that predicts the climbing rate of slip formwork systems. To evaluate the performance of the proposed method, an artificial neural network and linear genetic programming were used as well. The results show that the proposed formulation agrees well with actual values of the climbing rate of slip forming systems, with low error and complexity. Moreover, based on the performed sensitivity analysis, the weather conditions criterion is found to be the most influential parameter on the climbing rate of slip formwork systems.
       
  • An integrated production and procurement design for a multi-period
           multi-product manufacturing system with machine assignment and warehouse
           constraint
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Shaghayegh Vaziri, Arash Zaretalab, Mohammad Esmaeili, S.T.A. Niaki. Economic production and on-time ordering are among the most important topics in production and inventory control. Economical production requires comprehensive and precise planning implemented across all production stages. To obtain a controlled and comprehensive planning system, economic order quantity (EOQ) or economic production quantity (EPQ) models are usually used in various production-inventory environments to minimize costs, avoid delays in orders, and achieve high performance. To meet demand, a multi-period production-inventory plan involving several products sometimes requires outsourcing. In this paper, a production-procurement plan that integrates EOQ with EPQ is proposed for a multi-period multi-product production-inventory system with a limited warehouse capacity. The design aims to determine the optimal values of the decision variables, including the production quantities, the order quantities, inventories, shortages, and the machine assignments for producing the products over a planning horizon. The problem is formulated as a non-linear mixed-integer programming model. As the problem is NP-hard, a genetic algorithm (GA) is utilized to find a near-optimal solution. In addition, a random search (RS) algorithm is applied to validate the results obtained and to demonstrate the efficiency of the GA. Since there is no benchmark available in the literature, some randomly generated numerical examples are solved and a sensitivity analysis is performed in order to demonstrate the effectiveness of the proposed methodology and the solution algorithm.
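     • Illustrative sketch: the classical EOQ and EPQ formulas that the integrated model builds on, written as small Python functions; the parameter values in the example are made up, and the integrated multi-period, multi-product model of the article adds constraints well beyond these closed forms.

        import math

        def eoq(demand, order_cost, holding_cost):
            # Economic order quantity: Q* = sqrt(2 * D * K / h).
            return math.sqrt(2 * demand * order_cost / holding_cost)

        def epq(demand, setup_cost, holding_cost, production_rate):
            # Economic production quantity: Q* = sqrt(2 * D * K / (h * (1 - D / P))).
            return math.sqrt(2 * demand * setup_cost /
                             (holding_cost * (1 - demand / production_rate)))

        # Example: yearly demand 12,000 units, order/setup cost 150, holding cost 4 per unit per year,
        # production rate 30,000 units per year.
        print(eoq(12000, 150, 4))          # ~948.7 units per order
        print(epq(12000, 150, 4, 30000))   # ~1224.7 units per production run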
       
  • Solution method for a non-homogeneous fuzzy linear system of differential
           equations
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Nizami A. Gasilov, Afet Golayoglu Fatullayev, Şahin Emrah Amrahov. In this paper, we propose a new solution method for non-homogeneous fuzzy linear systems of differential equations. The coefficients of the considered system are crisp, while the forcing functions and initial values are fuzzy. We assume each forcing function to be in a special form, which we call a triangular fuzzy function and which represents a fuzzy bunch (set) of real functions. We construct a solution as a fuzzy set of real vector-functions, not as a vector of fuzzy-valued functions, as is usual. We interpret the given fuzzy initial value problem (fuzzy IVP) as a set of crisp (classical) IVPs. Such a crisp IVP is obtained by taking a forcing function from each of the fuzzy bunches and an initial value from each of the fuzzy intervals. The solution of the crisp IVP is a vector-function. We define it to be an element of the fuzzy solution set and assign it a membership degree equal to the lowest of the membership degrees of the chosen forcing functions and initial values in their corresponding fuzzy sets. We explain our approach and solution method with the help of several illustrative examples. We show the advantage of our method over the differential inclusions method and its applicability to real-world problems.
       
  • Decomposition genetic programming: An extensive evaluation on rainfall
           prediction in the context of weather derivatives
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Sam Cramer, Michael Kampouridis, Alex A. Freitas. Regression problems provide some of the most challenging research opportunities in machine learning, where the prediction of some target variable is critical to a specific application. Rainfall is a prime example, as it exhibits unique characteristics of high volatility and chaotic patterns that do not exist in other time series data. Moreover, rainfall is essential for applications that surround financial securities, such as rainfall derivatives. This paper extensively evaluates a novel algorithm called Decomposition Genetic Programming (DGP), which decomposes the problem of rainfall prediction into subproblems. Decomposition allows the GP to focus on each subproblem before combining them back into the full problem. The GP does this by having a separate regression equation for each subproblem, based on the level of rainfall. As attention is turned to subproblems, the difficulty of dealing with data sets exhibiting high volatility and extreme rainfall values is reduced, since these values can be focused on independently. We extensively evaluate our algorithm on 42 cities from Europe and the USA and compare its performance to the current state-of-the-art (a Markov chain extended with rainfall prediction) and six other popular machine learning algorithms (Genetic Programming without decomposition, Support Vector Regression, Radial Basis Neural Networks, M5 Rules, M5 Model Trees, and k-Nearest Neighbours). Results show that DGP is able to consistently and significantly outperform all other algorithms. Lastly, another contribution of this work is to discuss the effect that DGP has on the coverage of the rainfall predictions and whether it shows robust performance across different climates.
       
  • Hosting capacity of low-voltage grids for distributed generation:
           Classification by means of machine learning techniques
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Sebastian Breker, Jan Rentmeister, Bernhard Sick, Martin Braun. A high amount of installed distributed generation (DG) in low-voltage grids, e.g. photovoltaic (PV) generators, may cause serious problems due to the overloading of electrical equipment and the violation of voltage limits. Assessing low-voltage grids regarding their hosting capacity for the installation of DG is a difficult task, because grid structures may be diverse and complex. In this article, we classify grids by means of machine learning techniques, in particular support vector machines (SVM). The SVM learn to assess grids from sample data, that is, grids represented by characteristic features that were assessed by human domain experts (distribution system operator (DSO) staff). We show that this approach can reflect domain expert assessments significantly better than a technique we proposed earlier, which is based on a stochastic load flow simulation procedure and a subsequent parametric stochastic model estimation. One key result of this article is that SVM with grid-based features significantly outperform SVM using features from load flow simulations in terms of classification accuracy when both are trained with data assessed (labeled) by DSO staff. Experiments are based on data from 300 real rural and suburban low-voltage grids.
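     • Illustrative sketch: a minimal SVM classification pipeline with scikit-learn (RBF kernel, standardization, and a grid search over C and gamma evaluated by an outer cross-validation); the synthetic features and three-class labels are placeholders for the grid-based features and the DSO expert labels described above.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder for grid-based features (e.g. line lengths, transformer rating, customer count)
        # with expert-assigned hosting-capacity classes.
        X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)

        model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
        search = GridSearchCV(model,
                              {'svc__C': [1, 10, 100], 'svc__gamma': ['scale', 0.1, 0.01]},
                              cv=5)
        scores = cross_val_score(search, X, y, cv=5)   # nested CV estimate of classification accuracy
        print(scores.mean())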
       
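    A minimal sketch of the classification setup described above, with synthetic stand-ins for the expert-labelled grid data: characteristic grid features feed a standardised RBF-kernel SVM, and agreement with the (here simulated) expert labels is estimated by cross-validation.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 6))    # grid-based features (line lengths, loads, ...), synthetic here
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300) > 0).astype(int)  # mock expert label

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      print(cross_val_score(clf, X, y, cv=5).mean())   # proxy for agreement with expert assessments
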
  • An Iterated Tabu Search Metaheuristic for the Regenerator Location Problem
    • Abstract: Publication date: September 2018Source: Applied Soft Computing, Volume 70Author(s): Xiangyong Li, Chongfang Yue, Y.P. Aneja, Si Chen, Youzhi Cui We study the regenerator location problem (RLP) in optical networks where optical signal can only travel a maximum distance before its quality deteriorates, requiring regenerations by installing regenerators at network nodes. The RLP is to determine the minimum number of nodes for regenerator placement such that for each node pair there is a path on which no subpath without internal regenerators has a length greater than a given maximum distance. The RLP is NP-complete. We propose an iterated metaheuristic that iteratively invokes a regenerator-reducing procedure and tabu search to collaboratively solve the RLP. We compare our approach with other heuristics using benchmark and new RLP instances. Because of the equivalence among RLP, maximum leaf spanning tree problem (MLSTP), minimum connected dominating set problem (MCDSP) and minimum 1-connected 1-dominating set problem (1-1-DSP), we also compare our approach with other algorithms over MLSTP/MCDSP/1-1-DSP instances. Results demonstrate the effectiveness and efficiency of our method.
       
  • Multi-objective transmission congestion management considering demand
           response programs and generation rescheduling
    • Abstract: Publication date: September 2018Source: Applied Soft Computing, Volume 70Author(s): Fariborz Zaeim-Kohan, Hadi Razmi, Hasan Doagou-Mojarrad With the implementation of restructuring and deregulation in the electricity industry, efficient and economic operations of power systems have become remarkably significant. Meanwhile, congestion in transmission lines is one of the major problems in power system operation. Therefore, in this paper, the multi-objective particle swarm optimization (MOPSO) method has been used for transmission congestion management considering demand response programs (DRPs) and generation rescheduling. Total operation/DR cost, emission and increasing the loading of transmission lines are the objective functions of this problem. Using DRPs increases the operator power of choice with regard to the participation of small consumers in reducing the demand. The method already mentioned has been tested on two test systems (IEEE 30-bus and IEEE 118-bus test systems). The results of the evaluation in the form of different scenarios show that DRPs reduce the power system transmission lines congestion, allowing the use of the transmission lines with less loading capacity. Also, in some instances, without using DRPs, solving the transmission congestion management problem is impossible.
       
  • New approaches to determine age and gender in image processing techniques
           using multilayer perceptron neural network
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Emre Avuçlu, Fatih Başçiftçi. In the aftermath of events such as disasters, disputed inheritance, or disappearances, determining a person's age and gender can be vital. Forensic medical institutions determine age by examining structures such as teeth and bones, and forensic estimates are currently made manually from certain morphological findings on the teeth. In this study, 1313 panoramic dental images were used to estimate age automatically. Image preprocessing is applied to these images, and trapezoidal teeth are rectified in the coordinate plane to obtain more accurate and standardized results; this correction is performed with a newly developed algorithm. The dental images are segmented automatically and dynamically, and feature vectors are created by extracting their features. The generated feature vectors are dynamic and are presented as inputs to a multilayer perceptron neural network; the number of inputs can be reduced on request. Age and gender were thus determined from dental x-ray images with the newly developed algorithm. The application is written in the C# programming language. For some tooth groups, a classification rate of 100% and age determination with zero error were achieved. This approach should make age determination in forensic science more accurate. A minimal sketch of an MLP classifier on such feature vectors follows this entry.
       
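    For illustration only, a small multilayer perceptron classifier of the kind mentioned above, trained on synthetic stand-ins for the tooth-derived feature vectors; the study's real features come from segmented panoramic x-rays.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      X = rng.normal(size=(400, 20))          # one feature vector per segmented image (synthetic)
      y = rng.integers(0, 2, size=400)        # e.g., a gender label (random here)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
      mlp.fit(X_tr, y_tr)
      print(mlp.score(X_te, y_te))            # around 0.5 on these random labels
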
  • Single machine scheduling with two-agent for total weighted completion
           time objectives
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Hongwei Li, Yuvraj Gajpal, C.R. Bector. This paper considers a two-agent scheduling problem on a single machine that processes jobs from both agents. The objective is to minimize the objective function of one agent, subject to an upper bound on the objective function of the other agent. The objectives considered are (1) the minimization of total completion time and (2) the minimization of total weighted completion time. To solve these problems, a heuristic and an Ant Colony Optimization algorithm are proposed; the heuristic is motivated by the Weighted Shortest Processing Time first (WSPT) rule, sketched below. Numerical experiments are performed on randomly generated problem instances, and the performance of the algorithms is evaluated by comparing them with lower bound values for the problems considered in the paper.
       
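    A minimal sketch of the WSPT rule the heuristic is motivated by: jobs are sequenced in non-decreasing order of processing time divided by weight, which minimises total weighted completion time for a single agent on one machine. The data are toy values, and the paper's two-agent constraint handling is not shown.

      def wspt_order(jobs):
          """jobs: list of (processing_time, weight); returns job indices in WSPT order."""
          return sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])

      def total_weighted_completion_time(jobs, order):
          t, total = 0, 0
          for j in order:
              p, w = jobs[j]
              t += p                 # completion time of job j
              total += w * t
          return total

      jobs = [(3, 1), (1, 4), (2, 2)]          # illustrative (processing time, weight) pairs
      order = wspt_order(jobs)
      print(order, total_weighted_completion_time(jobs, order))   # [1, 2, 0] 16
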
  • Prediction of welding residual stresses using machine learning: Comparison
           between neural networks and neuro-fuzzy systems
    • Abstract: Publication date: September 2018Source: Applied Soft Computing, Volume 70Author(s): J. Mathew, J. Griffin, M. Alamaniotis, S. Kanarachos, M.E. Fitzpatrick Safe and reliable operation of power plants invariably relies on the structural integrity assessments of pressure vessels and piping systems. Welded joints are a potential source of failure, because of the combination of the variation in mechanical properties and the residual stresses associated with the thermomechanical cycles experienced by the material during welding. This paper presents comparative studies between methods based on artificial neural networks (ANN) and fuzzy neural networks (FNN) for predicting residual stresses induced by welding. The performance of neural network and neuro-fuzzy systems are compared based on statistical indicators, scatter plots and several case studies. Results show that the neuro-fuzzy systems optimised using a hybrid technique can perform slightly better than a neural network trained using Levenberg-Marquardt algorithm, primarily because of the inability of the ANN approach to provide conservative estimates of residual stress profiles. Specifically, the prediction accuracy of the neuro-fuzzy systems trained using the hybrid technique is better for the axial residual stress component, with root mean square error (RMSE), absolute fraction of variance (R2) and mean absolute percentage error (MAPE) error of 0.1264, 0.9102 and 22.9442 respectively using the test data. Furthermore, this study demonstrates the potential benefits of implementing neuro-fuzzy systems in predicting residual stresses for use in structural integrity assessment of power plant components.
       
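    For reference, simple implementations of the error measures quoted above (RMSE, MAPE, and R² taken here as the usual coefficient of determination; definitions of the "absolute fraction of variance" vary slightly across the ANN literature).

      import numpy as np

      def rmse(y_true, y_pred):
          return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

      def r_squared(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          sse = np.sum((y_true - y_pred) ** 2)
          sst = np.sum((y_true - y_true.mean()) ** 2)
          return 1.0 - sse / sst

      def mape(y_true, y_pred):
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

      print(rmse([1, 2, 3], [1.1, 1.9, 3.2]), r_squared([1, 2, 3], [1.1, 1.9, 3.2]))
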
  • A reference direction and entropy based evolutionary algorithm for
           many-objective optimization
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Miao Zhang, Huiqi Li. Evolutionary algorithms have been successfully applied to multi-objective optimization problems with two or three objectives. However, when solving problems with more than three objectives (known as many-objective optimization problems), most multi-objective evolutionary algorithms perform poorly, because the Pareto dominance relationship loses effectiveness in high-dimensional objective spaces and the diversity maintenance mechanism tends to drive the population away from the true Pareto front. In this paper, a novel approach is proposed to handle these challenges. Firstly, a grid-based approach is adopted to eliminate dominance-resistant solutions, that is, non-dominated solutions with excellent diversity that nevertheless cause dominance resistance and lead the population far from the true Pareto front. Secondly, a new diversity maintenance mechanism based on reference directions is proposed, which not only enhances diversity but also takes convergence into consideration. Because a dominance-based MOEA rarely has sufficient convergence capability on high-dimensional problems, the approach embeds convergence into the diversity maintenance process and balances convergence and diversity according to the evolutionary state and the Pareto entropy. The proposed algorithm is evaluated on standard benchmark functions, namely DTLZ1-7 and WFG1-9 with 3, 4, 5, 8 and 10 objectives, and compared with five state-of-the-art many-objective evolutionary algorithms (MaOEAs). Experimental results demonstrate the proposed algorithm's competitiveness in both convergence and diversity when solving many-objective optimization problems. Graphical abstract: general flowchart of the proposed Reference Direction and Entropy based Many-Objective Evolutionary Algorithm (RDE-MaOEA).
       
  • Determining the optimal temperature parameter for Softmax function in
           reinforcement learning
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Yu-Lin He, Xiao-Liang Zhang, Wei Ao, Joshua Zhexue Huang. The temperature parameter plays an important role in action selection based on the Softmax function, which transforms an original vector into a probability vector. This paper develops an efficient method, named Opti-Softmax, to determine the optimal temperature parameter for the Softmax function in reinforcement learning. Firstly, a new evaluation function is designed to measure the effectiveness of the temperature parameter by considering the information loss of the transformation and the diversity among the probability vector elements. Secondly, an iterative updating rule is derived that determines the optimal temperature parameter by finding the minimum of the evaluation function. Finally, experimental results on synthetic data and D-armed bandit problems demonstrate the feasibility and effectiveness of the Opti-Softmax method; the temperature-controlled Softmax transformation itself is sketched below.
       
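    The temperature-controlled Softmax transformation the paper analyses, in a minimal form: a vector of action values is mapped to a probability vector, with large temperatures giving near-uniform exploration and small ones near-greedy exploitation. The Opti-Softmax update rule itself is not reproduced here.

      import numpy as np

      def softmax_policy(q_values, tau):
          z = np.asarray(q_values, dtype=float) / tau
          z -= z.max()                     # subtract max for numerical stability
          p = np.exp(z)
          return p / p.sum()

      q = [1.0, 2.0, 0.5]
      print(softmax_policy(q, tau=10.0))   # high temperature: nearly uniform
      print(softmax_policy(q, tau=0.1))    # low temperature: nearly greedy
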
  • Algorithms for the multi-objective vehicle routing problem with hard time
           windows and stochastic travel time and service time
    • Abstract: Publication date: September 2018Source: Applied Soft Computing, Volume 70Author(s): Douglas M. Miranda, Juergen Branke, Samuel V. Conceição This paper introduces a multi-objective vehicle routing problem with hard time windows and stochastic travel and service times. This problem has two practical objectives: minimizing the operational costs, and maximizing the service level. These objectives are usually conflicting. Thus, we follow a multi-objective approach, aiming to compute a set of Pareto-optimal alternatives with different trade-offs for a decision maker to choose from. We propose two algorithms (a Multi-Objective Memetic Algorithm and a Multi-Objective Iterated Local Search) and compare them to an evolutionary multi-objective optimizer from the literature. We also propose a modified statistical method for the service level calculation. Experiments based on an adapted version of the 56 Solomon instances demonstrate the effectiveness of the proposed algorithms.
       
  • A survey on deep learning techniques for image and video semantic
           segmentation
    • Abstract: Publication date: September 2018Source: Applied Soft Computing, Volume 70Author(s): Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor Villena-Martinez, Pablo Martinez-Gonzalez, Jose Garcia-Rodriguez Image semantic segmentation is more and more being of interest for computer vision and machine learning researchers. Many applications on the rise need accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and even virtual or augmented reality systems to name a few. This demand coincides with the rise of deep learning approaches in almost every field or application target related to computer vision, including semantic segmentation or scene understanding. This paper provides a review on deep learning methods for semantic segmentation applied to various application areas. Firstly, we formulate the semantic segmentation problem and define the terminology of this field as well as interesting background concepts. Next, the main datasets and challenges are exposed to help researchers decide which are the ones that best suit their needs and goals. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. We also devote a part of the paper to review common loss functions and error metrics for this problem. Finally, quantitative results are given for the described methods and the datasets in which they were evaluated, following up with a discussion of the results. At last, we point out a set of promising future works and draw our own conclusions about the state of the art of semantic segmentation using deep learning techniques.
       
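    One of the standard error metrics such surveys discuss is the per-class Intersection over Union (IoU) between a predicted and a ground-truth label map; a minimal version is sketched below (the survey itself covers a broader set of metrics and loss functions).

      import numpy as np

      def class_iou(pred, target, cls):
          pred_c, target_c = (pred == cls), (target == cls)
          inter = np.logical_and(pred_c, target_c).sum()
          union = np.logical_or(pred_c, target_c).sum()
          return inter / union if union else float("nan")

      pred   = np.array([[0, 1], [1, 1]])      # toy 2x2 predicted label map
      target = np.array([[0, 1], [0, 1]])      # toy ground truth
      print(class_iou(pred, target, cls=1))    # 2/3
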
  • An analysis of control parameters of MOEA/D under two different
           optimization scenarios
    • Abstract: Publication date: September 2018Source: Applied Soft Computing, Volume 70Author(s): Ryoji Tanabe, Hisao Ishibuchi An unbounded external archive (UEA), which stores all nondominated solutions found during the search process, is frequently used to evaluate the performance of multi-objective evolutionary algorithms (MOEAs) in recent studies. A recent benchmarking study also shows that decomposition-based MOEA (MOEA/D) is competitive with state-of-the-art MOEAs when the UEA is incorporated into MOEA/D. However, a parameter study of MOEA/D using the UEA has not yet been performed. Thus, it is unclear how control parameter settings influence the performance of MOEA/D with the UEA. In this paper, we present an analysis of control parameters of MOEA/D under two performance evaluation scenarios. One is a final population scenario where the performance assessment of MOEAs is performed based on all nondominated solutions in the final population, and the other is a reduced UEA scenario where it is based on a pre-specified number of selected nondominated solutions from the UEA. Control parameters of MOEA/D investigated in this paper include the population size, scalarizing functions, and the penalty parameter of the penalty-based boundary intersection (PBI) function. Experimental results indicate that suitable settings of the three control parameters significantly depend on the choice of an optimization scenario. We also analyze the reason why the best parameter setting is totally different for each scenario.
       
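    For context, the penalty-based boundary intersection (PBI) scalarizing function whose penalty parameter is one of the MOEA/D control parameters studied above; the value of theta, the weight vector, and the ideal point used here are illustrative.

      import numpy as np

      def pbi(f, weight, z_ideal, theta=5.0):
          w = np.asarray(weight, dtype=float)
          d = np.asarray(f, dtype=float) - np.asarray(z_ideal, dtype=float)
          norm_w = np.linalg.norm(w)
          d1 = np.dot(d, w) / norm_w                 # distance along the weight direction
          d2 = np.linalg.norm(d - d1 * w / norm_w)   # distance away from the weight direction
          return d1 + theta * d2

      print(pbi([0.6, 0.4], weight=[0.5, 0.5], z_ideal=[0.0, 0.0]))
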
  • Optimal resource allocation using reinforcement learning for IoT
           content-centric services
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Keke Gai, Meikang Qiu. The exponential growth of networking technologies has dramatically enlarged the scope of the connected computing environment. The Internet of Things (IoT) is seen as a way of obtaining high performance through enhanced capabilities in system control, resource allocation, data exchange, and flexible adoption. However, current IoT deployments face a resource-allocation bottleneck caused by mismatched networking service quality and complicated service-offering environments. This paper focuses on resource allocation in IoT and uses the satisfaction level of Quality of Experience (QoE) to achieve intelligent content-centric services. A novel approach is proposed that uses Reinforcement Learning (RL) to obtain highly accurate, QoE-driven resource allocations. Two RL-based algorithms are proposed, one for creating cost mapping tables and one for optimal resource allocation. Experimental evaluations assess the efficiency of the proposed approach. A generic Q-learning update, illustrating the reinforcement-learning mechanism involved, is sketched below.
       
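    A generic tabular Q-learning update, included only to illustrate the reinforcement-learning mechanism underlying the resource-allocation algorithms described above; the state/action encoding and the reward used here are assumptions, not the authors' design.

      import numpy as np

      def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
          # standard one-step Q-learning update toward the bootstrapped target
          target = reward + gamma * np.max(Q[s_next])
          Q[s, a] += alpha * (target - Q[s, a])

      Q = np.zeros((4, 3))          # e.g., 4 load states x 3 allocation actions (illustrative)
      q_update(Q, s=0, a=2, reward=1.0, s_next=1)
      print(Q[0, 2])                # 0.1
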
  • Ensemble selector for attribute reduction
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Xibei Yang, Yiyu Yao. By abstracting what existing heuristic algorithms have in common, control strategies provide a higher-level understanding of how reducts are built in rough set theory. To further improve the performance and widen the applicability of the addition control strategy, an ensemble selector is introduced into this framework. The ensemble selector is realized through a set of fitness functions, which may be constructed from homogeneous or heterogeneous evaluations of the candidate attributes. Based on the neighborhood rough set model, the experimental results show that, compared with the traditional addition control strategy, the ensemble selector improves the stability of the reducts, the stability of the classification results, and the AUC values obtained with KNN and SVM classifiers. This study suggests new directions for attribute reduction problems and provides guidelines for designing new algorithms in rough set theory.
       
  • Recursive identification of multiple-input single-output fractional-order
           Hammerstein model with time delay
     • Abstract: Publication date: September 2018. Source: Applied Soft Computing, Volume 70. Author(s): Mohammad Jahani Moghaddam, Hamed Mojallali, Mohammad Teshnehlab. This paper deals with the identification of continuous-time Hammerstein systems with time delay using a Genetic Algorithm (GA) combined with the Recursive Least-Squares (RLS) method. The model consists of a Radial Basis Function Neural Network (RBFNN) as its nonlinear static part and a fractional-order transfer function as its dynamic linear part. The fractional orders are identified by a GA with an innovative strategy called the Modified Genetic Algorithm (MGA), whose main idea is selecting the best characteristics and transferring them to the next generation. The centers, widths, and weighting parameters of the RBFNN and the transfer-function coefficients of the linear dynamic part are updated by the RLS method; a textbook RLS update is sketched below. Simulation results illustrate the accuracy of the proposed method.
       
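    A textbook recursive least-squares (RLS) update of the kind referred to above, shown on a toy linear regression; the regressor vector, forgetting factor, and data are illustrative and do not reproduce the paper's Hammerstein structure.

      import numpy as np

      def rls_step(theta, P, phi, y, lam=0.99):
          # one RLS update: theta holds the current parameter estimates, P the covariance
          Pphi = P @ phi
          k = Pphi / (lam + phi @ Pphi)          # gain vector
          theta = theta + k * (y - phi @ theta)  # correct by the prediction error
          P = (P - np.outer(k, Pphi)) / lam
          return theta, P

      rng = np.random.default_rng(2)
      true_w = np.array([0.5, -1.0, 2.0])
      theta, P = np.zeros(3), 1e3 * np.eye(3)
      for _ in range(200):
          phi = rng.normal(size=3)
          theta, P = rls_step(theta, P, phi, true_w @ phi + rng.normal(scale=0.01))
      print(np.round(theta, 3))                  # close to [0.5, -1.0, 2.0]
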
  • Evaluating the Topological Quality of Watermarked Vector Maps
     • Abstract: Publication date: Available online 10 July 2018. Source: Applied Soft Computing. Author(s): Ahmed Abubahia, Mihaela Cocea. The pervasive use and exchange of digital content has led to increased efforts in the research community towards efficient approaches for protecting intellectual property rights. While watermarking techniques have been used extensively for the raster image format, watermarking approaches for the vector map format have been largely inspired by existing image watermarking techniques, without due consideration of their suitability for this different data format. A key requirement of any watermarking approach for vector data is the preservation of the topological quality of the watermarked data, sometimes referred to as the invisibility of the watermark. For vector map data, topological quality and invisibility are fundamentally different, but they are currently merged into one and measured with error metrics borrowed from image watermarking, such as Root Mean Squared Error (RMSE) and Peak Signal to Noise Ratio (PSNR). Over the last 10 years, the research community on watermarking vector map data has repeatedly argued that error metrics alone are not appropriate for evaluating the topological quality of watermarked vector maps. In this paper, a metric for topological quality is proposed that measures topological distortions based on the topological properties of polygon-based vector maps. To evaluate the proposed metric, experiments with controlled watermarking capacity (i.e. how much is embedded) were run on maps of various sizes, using two popular embedding approaches, coordinate-based and distance-based embedding. The results indicate that the metric allows comparisons between watermarked maps of different sizes and with different watermark sizes, and can thus be used to assess the quality of watermarked vector maps. The advantages and limitations of the proposed metric are discussed, and further research directions towards a metric agreed upon by the research community are highlighted.
       
  • Creating Hard-to-Solve Instances of Travelling Salesman Problem
     • Abstract: Publication date: Available online 10 July 2018. Source: Applied Soft Computing. Author(s): Miguel Cárdenas-Montes. The Travelling Salesman Problem is a classical combinatorial optimization problem, and its instances have been used as benchmarks for testing the performance of many proposed discrete optimization algorithms. However, the hardness, or degree of difficulty, of solving these instances has rarely been addressed. In the past, evaluating the difficulty of an instance required obtaining a high-quality solution, ideally the optimal one, a strategy that burdens the evaluation with long processing times. In this work, several indirect measures for evaluating the hardness of Travelling Salesman Problem instances are proposed. These measures are inferred from the spatial attributes of previously evaluated instances and then correlated with instance hardness. Where a significant correlation is found, a linear model is built and linked to a genetic algorithm. As a consequence of this work, mechanisms for hardening instances of the Travelling Salesman Problem are implemented.
       
  • Adaptive Predator-Prey Optimization for Tuning of Infinite Horizon LQR
           applied to Vehicle Suspension System
    • Abstract: Publication date: Available online 9 July 2018Source: Applied Soft ComputingAuthor(s): Rashmi Ranjan Das, Vinodh Kumar Elumalai, Raaja Ganapathy Subramanian, Kadiyam Venkata Ashok Kumar This paper puts forward an adaptive predator-prey optimization algorithm to solve the weight selection problem of linear quadratic control applied for vibration control of vehicle suspension system. The proposed technique addresses the two key issues of PSO, namely (a) the premature convergence of the particles, and (b) the imbalance between exploration and exploitation of the particles in finding the global optimum. The main principle behind this optimization algorithm is that the inertia weight is adaptively updated based on the success rate of the particles to increase the convergence, and the predator-prey strategy is reinforced to avoid the particles getting trapped in a local minimum thereby, guaranteeing convergence of the particles towards the global optimal solution. The convergence of the particles towards the global minimum is guaranteed on the basis of a passivity argument. Moreover, the strength of this new adaptive optimization technique to tune the gains of linear quadratic regulator is validated experimentally on a laboratory scale active vehicle suspension system for improved ride comfort and passenger safety.
       
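    One simple success-rate-based inertia-weight rule, given only to illustrate the kind of adaptive update described above; the bounds and the linear form are assumptions, and the paper's exact update (and its predator-prey mechanism) may differ.

      def adaptive_inertia(success_count, swarm_size, w_min=0.4, w_max=0.9):
          # the more particles that improved their personal best this iteration,
          # the larger the inertia weight (more exploration)
          success_rate = success_count / swarm_size
          return w_min + (w_max - w_min) * success_rate

      print(adaptive_inertia(success_count=12, swarm_size=30))   # 0.6
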
  • A robust fuzzy least squares twin support vector machine for class
           imbalance learning
    • Abstract: Publication date: Available online 9 July 2018Source: Applied Soft ComputingAuthor(s): B. Richhariya, M. Tanveer Twin support vector machine is one of the most prominent techniques for classification problems. It has been applied in various real world applications due to its less computational complexity. In most of the applications on classification, there is imbalance in the number of samples of the classes which leads to incorrect classification of the data points of the minority class. Further, while dealing with imbalanced data, noise poses a major challenge in various applications. To resolve these problems, in this paper we propose a robust fuzzy least squares twin support vector machine for class imbalance learning termed as RFLSTSVM-CIL using 2-norm of the slack variables which makes the optimization problem strongly convex. In order to reduce the effect of outliers, we propose a novel fuzzy membership function specifically for class imbalance problems. Our proposed function gives the appropriate weights to the datasets and also incorporates the knowledge about the imbalance ratio of the data. In our proposed model, a pair of system of linear equations is solved instead of solving a quadratic programming problem (QPP) which makes our model efficient in terms of computation complexity. To check the performance of our proposed approach, several numerical experiments are performed on synthetic and real world benchmark datasets. Our proposed model RFLSTSVM-CIL has shown better generalization performance in comparison to the existing methods in terms of AUC and training time.
       
  • The single machine scheduling problem with sequence-dependent setup times
           and a learning effect on processing times
     • Abstract: Publication date: Available online 9 July 2018. Source: Applied Soft Computing. Author(s): Settar Mustu (Muştu), Tamer Eren. In the past few years, learning effects have received wide attention in scheduling problems. Many studies take the learning effect into account together with various structural constraints and specific conditions of machine scheduling problems. However, very few studies have explored scheduling problems that include a learning effect and sequence-dependent setup times simultaneously, although both are well established in the scheduling literature. This study therefore addresses the NP-hard problem of minimizing total tardiness on a single machine with sequence-dependent setup times and a position-based learning effect on processing times; an illustrative evaluation of a job sequence under such a model is sketched below. Several dominance properties are established and a lower bound is provided to develop a branch and bound algorithm for solving the problem. In addition, four heuristic algorithms (two genetic algorithms and two variable neighborhood search algorithms) are developed to obtain near-optimal solutions within reasonable time limits. Computational experiments on randomly generated test problems evaluate the performance of the algorithms.
       
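    An illustrative evaluation of one job sequence under sequence-dependent setups and a common position-based learning-effect model, where a job with nominal time p_j processed in position r takes p_j * r**a with a < 0; the data, the value of a, and the exact learning model are assumptions made for this sketch.

      def total_tardiness(sequence, p, d, setup, a=-0.2):
          """sequence: job order; p: processing times; d: due dates;
          setup[i][j]: setup time when job j directly follows job i."""
          t, tard, prev = 0.0, 0.0, None
          for r, j in enumerate(sequence, start=1):
              s = setup[prev][j] if prev is not None else 0.0   # no setup before the first job
              t += s + p[j] * r ** a                            # position-based learning effect
              tard += max(0.0, t - d[j])
              prev = j
          return tard

      p = [4.0, 3.0, 5.0]
      d = [6.0, 5.0, 12.0]
      setup = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
      print(total_tardiness([1, 0, 2], p, d, setup))
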
  • Memetic fuzzy clustering protocol for wireless sensor networks: Shuffled
           frog leaping algorithm
    • Abstract: Publication date: Available online 9 July 2018Source: Applied Soft ComputingAuthor(s): Fakhrosadat Fanian, Marjan Kuchaki Rafsanjani The importance of Wireless Sensor Networks (WSNs) has increased owing to their extensive advances, allowing integration of nano-sensors, wireless networks, and smart software. The main challenge in WSNs is the fast sensor energy discharging. One of the most effective approaches to deal with this issue is clustering and selecting appropriate cluster heads. This study presents a Fuzzy Shuffled Frog Leaping Algorithm (FSFLA), which employs the memetic Shuffled Frog Leaping Algorithm (SFLA) to optimize the Mamdani fuzzy rule-base table based on the application specifications. In addition to automatically adjusting the if-then fuzzy rules, this protocol optimizes five controllable parameters associated with the inputs to the fuzzy system in an offline procedure prior to launching the network. The inputs of the fuzzy systems include remaining energy, distance from the base station, the number of neighboring nodes, and node histories. The proposed clustering algorithm can be adjusted according to the application due to having two determined thresholds for turning candidate nodes to final cluster heads. The proposed FSFLA protocol is compared to various protocols such as LEACH, LEACH-DT, SIF, and ASLPR, in terms of the number of the network lifetime, remaining energy, the number of packets successfully received at the base station and intra-cluster distance. The simulation results indicate that the proposed FSFLA clustering protocol, which is implemented in two versions, significantly outperforms other protocols in all scenarios.
       
  • Hybrid Metaheuristics and Multi-agent Systems for solving Optimization
           Problems: a review of frameworks and a comparative analysis
    • Abstract: Publication date: Available online 7 July 2018Source: Applied Soft ComputingAuthor(s): Maria Amélia Lopes Silva, Sérgio Ricardo de Souza, Marcone Jamilson Freitas Souza, Moacir Felizardo de França Filho This article presents a review and a comparative analysis between frameworks for solving optimization problems using metaheuristics. The aim is to identify both the desirable characteristics as the existing gaps in the current state of the art, with a special focus on the use of multi-agent structures in the development of hybrid metaheuristics. A literature review of existing frameworks is introduced, with emphasis on their characteristics of hybridization, cooperation, and parallelism, particularly focusing on issues related to the use of multi-agents. For the comparative analysis, a set of twenty-two characteristics was listed, according to four categories: basics, advanced, multi-agent approach and support to the optimization process. Strategies used in hybridization, such as parallelism, cooperation, decomposition of the search space, hyper-heuristic and multi-agent systems are assessed in respect to their use in the various analyzed frameworks. Specific features of multi-agent systems, such as learning and interaction between agents, are also analyzed. The comparative analysis shows that the hybridization is not a strong feature in existing frameworks. On the other hand, proposals using multi-agent systems stand out in the implementation of hybrid methods, as they allow the interaction between metaheuristics. It also notes that the concept of hyper-heuristic is little explored by the analyzed frameworks, as well as there is a lack of tools that offer support to the optimization process, such as statistical analysis, self-tuning of parameters and graphical interfaces. Based on the presented analysis, it can be said that there are important gaps to be filled in the development of Frameworks for Optimization using metaheuristics, which open important possibilities for future works, particularly by implementing the approach of multi-agent systems.Graphical abstractGraphical abstract for this article
       
  • A Multi-tier Immuno-inspired Framework for Heterogeneous Mobile Robotic
           Systems
    • Abstract: Publication date: Available online 7 July 2018Source: Applied Soft ComputingAuthor(s): Ali Raza, Benito R. Fernández A new immuno-inspired framework, combining innate and adaptive components of immunity, is presented for heterogeneous mobile robots. The framework is aimed to enable the robots to perform different tasks, ranging from solving complex mazes to search-and-rescue in unstructured scenarios, albeit the differences in robot-configurations. The framework operates in a three-tier subsuming structure: the lowest rung offers multiple reactive functions of innate immunity, the middle tier provides a T-cell-inspired reinforcement mechanism, and the top level invokes adaptive functions of clonal selection and idiotypic networks. The arbitration among these tiers is based on robots’ performance which is translated in terms of inflammation and cell-maturity. It resultantly helps to regulate the immuno-responses and maintain a homeostasis in the system. In order to handle multiple sensory configurations, a cross tier representation scheme is also devised. The framework is validated through extensive simulations in various unstructured scenarios, performing multiple tasks with different robot-configurations.Graphical abstractGraphical abstract for this article
       
  • Multi-item 4D-TPs under budget constraint using rough interval
    • Abstract: Publication date: Available online 6 July 2018Source: Applied Soft ComputingAuthor(s): Sukhendu Bera, Pravash Kumar Giri, Dipak Kumar Jana, Kajla Basu, Manoranjan Maiti Multi-item four dimensional transportation problems (MI4D-TPs) are formulated with budget constraint under rough and fuzzy interval environments. Here unit selling prices, unit purchasing costs, unit transportation costs, fixed charges, sources, demands, capacities of conveyances and total budget are represented by rough intervals for rough model and fuzzy numbers for fuzzy model. In this paper, four dimensional transportation problems (4D-TPs) under rough interval environment are transformed to deterministic ones by three different methods - (i) upper and lower approximation intervals, (ii) Expected Value Technique and (iii) rough chance-constraint programming technique. The fuzzy 4D-TP is made deterministic using possibility measures. Reduced problems are solved by generalized reduced gradient (GRG) method using LINGO 12.0 software. The models are illustrated with numerical examples and natures of the solutions i.e. complete and rather satisfactory solutions are presented. As particular cases, the results of solid transportation problems (3D-TPs) and conventional transportation problems (2D-TPs) are obtained. Importance of consideration of routes in MI4D-TPs is pointed out. As a particular case, the results of an earlier investigation are derived. Sensitivity of profit in the rough model through Expected Value Technique is presented against the arithmetic combination parameter. Percentage of increase of profit against the expectation parameter is furnished.
       
  • An outranking approach using a risk attitudinal assignment model involving
           Pythagorean fuzzy information and its application to financial decision
           making
    • Abstract: Publication date: Available online 5 July 2018Source: Applied Soft ComputingAuthor(s): Ting-Yu ChenABSTRACTThe aim of this paper is to develop a novel outranking method using a risk attitudinal assignment model under complex uncertainty based on Pythagorean fuzzy (PF) sets and interval-valued Pythagorean fuzzy (IVPF) sets. Additionally, this paper applies the proposed methodology to address financial decision-making problems to investigate its applicability and effectiveness in the real world. These developed methods and techniques can provide a new viewpoint for capturing the amount of information conveyed by PF and IVPF values and can fully take into account the decision maker’s attitude towards risk-taking in the face of high uncertainty. Considering the diverse attitudes toward risk neutrality, risk seeking, and risk aversion, this paper introduces novel measures of risk attitude-based score functions within the PF/IVPF environments and investigates their desirable and useful properties. These properties provide a means of establishing linear orders and admissible orders in the PF/IVPF contexts. This means that the proposed measures can overcome some drawbacks and ambiguities of the previous techniques through score and accuracy functions and can address incomparable PF/IVPF information more effectively. As an application of the risk attitude-based score functions, this paper develops novel risk attitudinal assignment models to establish a useful outranking approach for solving multiple criteria decision-making problems. Two algorithms for PF and IVPF settings are developed based on the concepts of a precedence frequency matrix and a precedence contribution matrix for conducting multiple criteria evaluation and the ranking of alternatives. A financing decision on aggressive/conservative policies of working capital management is presented to demonstrate the applicability of the proposed outranking approach in real situations. Moreover, a comparison to the technique of risk attitudinal ranking methods is investigated to validate the advantages of the proposed methodology. Furthermore, a comparative analysis with a newly developed outranking method, the IVPF-ELECTRE (for the elimination and choice translating reality), is conducted via an application to the investment problem of software development projects. Compared with these benchmark approaches, the proposed methods can produce a more reasonable and persuasive result for ranking the order relationships of alternatives in the highly uncertain context. The practical studies and comparative discussions demonstrate an excellent performance of the developed methodology that is effective and flexible enough to accommodate more-complex decision-making environments.
       
  • A discrete water cycle algorithm for solving the symmetric and asymmetric
           traveling salesman problem
    • Abstract: Publication date: Available online 4 July 2018Source: Applied Soft ComputingAuthor(s): Eneko Osaba, Javier Del Ser, Ali Sadollah, Miren Nekane Bilbao, David Camacho The water cycle algorithm (WCA) is a nature-inspired meta-heuristic recently contributed to the community in 2012, which finds its motivation in the natural surface runoff phase in water cycle process and on how streams and rivers flow into the sea. This method has been so far successfully applied to many engineering applications, spread over a wide variety of application fields. In this paper an enhanced discrete version of the WCA (coined as DWCA) is proposed for solving the Symmetric and Asymmetric Traveling Salesman Problem. Aimed at proving that the developed approach is a promising approximation method for solving this family of optimization problems, the designed solver has been tested over 33 problem datasets, comparing the obtained outcomes with the ones got by six different algorithmic counterparts from the related literature: genetic algorithm, island-based genetic algorithm, evolutionary simulated annealing, bat algorithm, firefly algorithm and imperialist competitive algorithm. Furthermore, the statistical significance of the performance gaps found in this benchmark is validated based on the results from non-parametric tests, not only in terms of optimality but also in regards to convergence speed. We conclude that the proposed DWCA approach outperforms – with statistical significance – any other optimization technique in the benchmark in terms of both computation metrics.
       
  • Multinodal load forecasting for distribution systems using a fuzzy-ARTMAP
           neural network
     • Abstract: Publication date: Available online 3 July 2018. Source: Applied Soft Computing. Author(s): Thays Abreu, Aline J. Amorim, Carlos R. Santos, Anna D.P. Lotufo, Carlos R. Minussi. This work proposes a multinodal forecasting system that predicts load at several points of an electrical network, such as substations, transformers, and feeders, based on the adaptive resonance theory (ART) neural network family. The problem is similar to global forecasting, the main difference being the strategy of aligning the input and output data across several parallel neural modules. Because multinodal prediction is more complex than global prediction, it uses a fuzzy-ARTMAP neural network together with a global load participation factor. The advantages of this approach are as follows: (1) the processing time is equivalent to that required for global forecasting (the additional processing time is quite low); and (2) fuzzy-ARTMAP neural networks converge significantly faster than backpropagation neural networks while remaining precise. Neural networks of the ART family are preferred because of the stability and plasticity of these architectures, which provide results quickly and accurately. To test the proposed forecasting system, results are presented for nine substations from the database of an electrical company.
       
  • Semi-supervised learning based distributed attack detection framework for
           IoT
    • Abstract: Publication date: Available online 1 July 2018Source: Applied Soft ComputingAuthor(s): Shailendra Rathore, Jong Hyuk Park Alongside the development of Internet of Things (IoT), security attacks are also increasing day by day. A number of centralized attack detection mechanisms have been proposed to detect attacks in IoT, wherein an attack detection system is deployed at the central point in the network that collects data from the network and classifies it as “attack” or “normal” using a supervised machine learning algorithm. Note, however, that these mechanisms have failed to achieve significant results due to the distinct requirements of IoT devices, such as scalability, distribution, resource limitations, and low latency. Moreover, the application of supervised machine learning for classification needs a significant amount of labeled data. In this paper, we introduce a fog-based attack detection framework that relies on the fog computing paradigm and a newly proposed ELM-based Semi-supervised Fuzzy C-Means (ESFCM) method. As an extension of cloud computing, fog computing enables attack detection at the network edge and supports distributed attack detection. The ESFCM method uses a semi-supervised fuzzy c-means algorithm to handle the labeled data issue and an Extreme Learning Machine (ELM) algorithm to provide good generalization performance at a faster detection rate. The evaluation was performed on the NSL-KDD dataset, demonstrating that the proposed framework achieved better performance than the centralized attack detection framework. More specifically, it recorded a lower detection time of 11 milliseconds and an accuracy rate of 86.53%.
       
  • A Genetic Programming Method for Feature Mapping to Improve Prediction of
           HIV-1 Protease Cleavage Site
    • Abstract: Publication date: Available online 30 June 2018Source: Applied Soft ComputingAuthor(s): Abdolhossein Fathi, Rasool Sadeghi The human immunodeficiency virus (HIV) is the cause of acquired immunodeficiency syndrome (AIDS), which has profound implications in terms of both economic burden and loss of life. Modeling and examination of the HIV protease cleavage of amino acid sequences can contribute to control of this disease and production of more effective drugs. The present paper introduces a new method for encoding and characterization of amino acid sequences and a new model for the prediction of amino acid sequence cleavage by HIV protease. The proposed encoding scheme utilizes a combination of amino acids’ spatial and structural features in conjunction with 20 amino acid sequences to make sure that their physicochemical and sequencing features are all taken into account. The proposed HIV-1 amino acid cleavage prediction model is developed with the combination of genetic programming and support vector machine. The results of evaluations performed on various datasets demonstrate the superior performance of the proposed encoding and better accuracy of the proposed HIV-1 cleavage prediction model as compared to the state-of-the-art methods.
       
  • Modeling of solar cell under different conditions by Ant Lion Optimizer
           with LambertW function
     • Abstract: Publication date: Available online 30 June 2018. Source: Applied Soft Computing. Author(s): G. Kanimozhi, Harish Kumar. The performance of a solar cell is studied by estimating its internal parameters with the single diode model. Environmental operating conditions such as temperature and solar irradiance also influence the electrical I-V curves that characterize the photovoltaic cell. In this paper, the solar cell parameters are extracted from experimental I-V data, under varying temperature and irradiance, using the single diode model equations expressed with the Lambert W function (see the sketch after this entry) and an Ant Lion Optimizer implemented in IPython. The low Root Mean Squared Error obtained validates the simulated results. The internal parameters obtained by the Lambert-W-based Ant Lion Optimizer under varied environmental conditions are in good agreement with literature reports, showing the approach to be an effective technique.
       
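    A sketch of the explicit single-diode I-V relation written with the Lambert W function (the Jain-Kapoor form); the parameter values below are illustrative placeholders of the kind an Ant Lion Optimizer would fit to measured I-V data.

      import numpy as np
      from scipy.special import lambertw

      def diode_current(V, Iph, I0, Rs, Rsh, n, Vt=0.0259):
          a = n * Vt                                   # modified thermal voltage per cell
          arg = (I0 * Rs * Rsh / (a * (Rs + Rsh))) * np.exp(
              Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh)))
          return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambertw(arg).real

      V = np.linspace(0.0, 0.55, 5)
      print(diode_current(V, Iph=0.76, I0=3e-7, Rs=0.036, Rsh=53.0, n=1.5))   # illustrative parameters
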
  • A Methodology for Monitoring Smart Buildings with Incomplete Models
    • Abstract: Publication date: Available online 28 June 2018Source: Applied Soft ComputingAuthor(s): Hamed Khorasgani, Gautam Biswas This paper combines a residual-based diagnosis approach and an unsupervised anomaly detection method for monitoring and fault diagnosis in smart buildings. Typically buildings are very complex, and it is computationally intractable to built accurate diagnosis models for large buildings. However, complete and fairly accurate models can be constructed for components like supply and exhaust fans in buildings. Our proposed method combines a model-based diagnosis approach that uses available models and a data driven approach that uses machine learning techniques along with the additional sensor data available from buildings to update a diagnosis reference model and build more complete diagnosis systems for buildings. To estimate the likelihood of each potential fault in the complex systems, the dependencies between components and, therefore, the sensor measurements need to be considered for accurate diagnosis. In this work, we employ the tree augmented naive Bayesian learning algorithm (TAN) to develop classifiers for fault detection and isolation. TAN structures can accommodate some dependencies between the measurements. We demonstrate and validate the proposed approach using a data-set from an outdoor air unit (OAU) system in the Lentz public health center in Nashville.
       
  • A Novel History-driven Artificial Bee Colony Algorithm for Data Clustering
    • Abstract: Publication date: Available online 28 June 2018Source: Applied Soft ComputingAuthor(s): Farzaneh Zabihi, Babak Nasiri Clustering is a branch of unsupervised learning in which data are divided into groups with similar members called cluster. Since clustering is considered as a NP-hard problem, many swarm intelligence-based algorithms have been applied in recent years to solve this problem effectively. In this paper, a novel variant of Artificial Bee Colony (ABC) algorithm called History-driven Artificial Bee Colony (Hd-ABC) is proposed to improve the ABC’s performance by applying a memory mechanism. The Hd-ABC utilizes a binary space partitioning (BSP) tree to memorize useful information of the evaluated solutions. Applying this memory mechanism, the fitness landscape can be approximated before actual fitness evaluation. Fitness evaluation is a costly and time consuming process in clustering problem, but utilizing the memory mechanism has decreased the number of fitness evaluations significantly and accelerated the optimization process by estimating the fitness value of solutions instead of calculating actual fitness values. Moreover, inspired by guided anisotropic search (GAS) strategy, a new local search mechanism is introduced to improve the exploitation capability as well as the convergence speed of the ABC in the onlooker bee phase. In the proposed algorithm, the GAS strategy incorporates the BSP tree to mutate solutions in the direction of the nearest optimum instead of the random walk of the ABC. Also, in order to improve the global search strategy of the ABC in the scout bee phase, a new mechanism is proposed which produces fitter starter solutions by finding and modifying the worst dimension in each solution. The proposed algorithm, for data clustering has been applied on nine UCI datasets and two artificial datasets. Both the experimental and statistical results show that the proposed algorithm outperforms the original ABC, its variants and the other state-of-art clustering algorithms; and the simulations indicate very promising results in terms of solution quality.
       
  • A Modified Water Cycle Algorithm for Long-term Multi-Reservoir
           Optimization
     • Abstract: Publication date: Available online 26 June 2018. Source: Applied Soft Computing. Author(s): Yuni Xu, Yadong Mei. Hydropower generation is a common consideration in reservoir operations. Because of its high dimensionality, multi-reservoir optimization for hydropower generation is challenging. Although the original water cycle algorithm (WCA) outperforms other metaheuristic optimization algorithms on constrained and unconstrained problems, it can still become trapped in local optima on some high-dimensional problems. For multi-reservoir optimization problems, this paper presents a modified WCA based on diversity evaluation and chaos theory (DC-WCA) to avoid premature convergence. Six benchmark problems are examined using DC-WCA, WCA, an improved chaos genetic algorithm (ICGA) and a chaotic particle swarm optimization algorithm (CPSO) to evaluate the efficacy of DC-WCA. To investigate the application of DC-WCA to multi-reservoir optimization, a serially linked reservoir system located in the upper reach of the Yangtze river of China is used as a case study. After sensitivity analyses of the parameters, the four algorithms are applied to long-term multi-reservoir optimization for hydropower generation. The optimization results show that (1) the annual hydropower generation optimized using DC-WCA is superior to that of the other reported optimizers, and (2) the convergence rate of DC-WCA is faster than those of the other reported optimizers. The case study indicates that applying DC-WCA to multi-reservoir optimization not only improves the calculation accuracy but also reduces computation time. Key findings: (1) DC-WCA finds better solutions faster than the other reported optimizers on the benchmark problems; (2) its performance is highly competitive on the multi-reservoir optimization. (Graphical abstract, Fig. 1: comparison of ICGA, CPSO, WCA, and DC-WCA on (a) the benchmark problems and (b) the multi-reservoir case study.)
       
  • A hybrid DPSO with Levy flight for scheduling MIMO radar tasks
    • Abstract: Publication date: Available online 23 June 2018Source: Applied Soft ComputingAuthor(s): Haowei Zhang, Junwei Xie, Qiyong Hu, Lei Shao, Tangjun Chen In this paper, an optimization model for the multiple-input and multiple-output (MIMO) radar task scheduling is established, and a hybrid discrete particle swarm optimization (DPSO) algorithm with Levy flight is proposed for a solution to the model. The optimization model takes the task internal structure, the characteristics of task scheduling in the MIMO radar and the three task scheduling principles into consideration. The hybrid DPSO integrates a heuristic task interleaving algorithm for the task schedulability analysis of candidate scheduling schemes (particles) with a DPSO with Levy flight for exploring the best solution. The heuristic task interleaving algorithm not only exploits the wait interval to interleave subtasks, but also incorporates transmit intervals and overlaps receive intervals in order to make a maximum utilization of the radar timeline. The DPSO is combined with Levy flight to escape from local optima by utilizing the long jump property. In addition, the chaos initialization and the linearly decreasing inertia weight are designed to enhance the exploration ability and the exploitation ability. The simulation results verify the outperformance of the proposed algorithm compared with the existing ones.
       
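    Levy-flight steps are commonly drawn with Mantegna's algorithm; the sketch below shows that standard construction to illustrate the long-jump behaviour exploited above, not the paper's particular DPSO update.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(dim, beta=1.5, rng=None):
          rng = rng or np.random.default_rng()
          sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, size=dim)       # heavy-tailed numerator
          v = rng.normal(0.0, 1.0, size=dim)
          return u / np.abs(v) ** (1 / beta)           # mostly small steps, occasional long jumps

      print(levy_step(dim=5))
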
  • Hybrid meta-heuristic algorithms for independent job scheduling in grid
           computing
    • Abstract: Publication date: Available online 26 May 2018Source: Applied Soft ComputingAuthor(s): Muhanad Tahrir Younis, Shengxiang Yang The term ‘grid computing’ is used to describe an infrastructure that connects geographically distributed computers and heterogeneous platforms owned by multiple organizations allowing their computational power, storage capabilities and other resources to be selected and shared. The job scheduling problem is recognized as being one of the most important and challenging issues in grid computing environments. This paper proposes two strongly coupled hybrid meta-heuristic schedulers. The first scheduler combines Ant Colony Optimization and Variable Neighbourhood Search in which the former acts as the primary algorithm which, during its execution, calls the latter as a supporting algorithm, while the second merges the Genetic Algorithm with Variable Neighbourhood Search in the same fashion. Several experiments were carried out to analyze the performance of the proposed schedulers in terms of minimizing the makespan using well known benchmarks. The experiments show that the proposed schedulers achieved impressive results compared to other selected approaches from the bibliography.
       
  • Fuzzy GRASP with path relinking for the Risk-constrained Cash-in-Transit
           Vehicle Routing Problem
    • Abstract: Publication date: Available online 21 May 2018Source: Applied Soft ComputingAuthor(s): Nina Radojičić, Aleksandar Djenić, Miroslav Marić This paper considers a special case of famous vehicle routing problem with additional risk constraints, called the Risk-constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP). We propose a fuzzy GRASP (Greedy Randomized Adaptive Search Procedure) hybridized with path relinking (PR) methodology for solving the RCTVRP. Introduced PR structure, which can be used for other vehicle routing problems, is implemented. To make the algorithm's time complexity smaller, new data structure for the RCTVRP is incorporated. Proposed fuzzy GRASP with PR hybrid shows better computational performance compared to its non-fuzzy version. Furthermore, computational results on publicly available data sets indicate that proposed algorithm outperforms all existing methods from the literature for solving the RCTVRP.
       
  • Physics of failure-based reliability prediction of turbine blades using
           multi-source information fusion
     • Abstract: Publication date: Available online 19 May 2018. Source: Applied Soft Computing. Author(s): He Li, Hong-Zhong Huang, Yan-Feng Li, Jie Zhou, Jinhua Mi. Fatigue and fracture of turbine blades are fatal to aero engines, so reliability prediction is indispensable to guarantee their safety. For turbine blades, most recent research focuses only on the number of cycles and extracts information from a single source. To remove these limitations, this paper develops a physics-of-failure-based reliability prediction method using multi-source information fusion to predict the reliability of aero engine turbine blades. In the proposed method, fuzzy theory is employed to represent the uncertainties involved in prediction. Case studies of reliability prediction under fuzzy stress, with and without fuzzy strength, are conducted using a dynamic stress-strength interference model that takes the types of engine cycles into consideration. Results indicate that the proposed method is better aligned with engineering practice, is more flexible for decision making, and can predict the reliability of turbine blades as an interval by means of the proposed linear fusion algorithm. In addition, the predicted interval contains the results predicted by other commonly used information fusion methods; hence, the proposed method helps remove the confusion caused by having to choose among multiple methods.
       
 
 