  Subjects -> COMPUTER SCIENCE (Total: 2102 journals)
    - ANIMATION AND SIMULATION (31 journals)
    - ARTIFICIAL INTELLIGENCE (103 journals)
    - AUTOMATION AND ROBOTICS (105 journals)
    - CLOUD COMPUTING AND NETWORKS (64 journals)
    - COMPUTER ARCHITECTURE (10 journals)
    - COMPUTER ENGINEERING (11 journals)
    - COMPUTER GAMES (21 journals)
    - COMPUTER PROGRAMMING (26 journals)
    - COMPUTER SCIENCE (1221 journals)
    - COMPUTER SECURITY (47 journals)
    - DATA BASE MANAGEMENT (14 journals)
    - DATA MINING (36 journals)
    - E-BUSINESS (22 journals)
    - E-LEARNING (30 journals)
    - ELECTRONIC DATA PROCESSING (22 journals)
    - IMAGE AND VIDEO PROCESSING (40 journals)
    - INFORMATION SYSTEMS (108 journals)
    - INTERNET (95 journals)
    - SOCIAL WEB (53 journals)
    - SOFTWARE (34 journals)
    - THEORY OF COMPUTING (9 journals)

COMPUTER SCIENCE (1221 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 21)
Abakós     Open Access   (Followers: 4)
ACM Computing Surveys     Hybrid Journal   (Followers: 29)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 8)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 16)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 15)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 5)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 9)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 12)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 15)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 6)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 6)
ACM Transactions on Economics and Computation     Hybrid Journal   (Followers: 1)
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 20)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 8)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 8)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 32)
Acta Automatica Sinica     Full-text available via subscription   (Followers: 2)
Acta Informatica Malaysia     Open Access  
Acta Universitatis Cibiniensis. Technical Series     Open Access  
Ad Hoc Networks     Hybrid Journal   (Followers: 11)
Adaptive Behavior     Hybrid Journal   (Followers: 10)
Advanced Engineering Materials     Hybrid Journal   (Followers: 28)
Advanced Science Letters     Full-text available via subscription   (Followers: 11)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 7)
Advances in Artificial Intelligence     Open Access   (Followers: 15)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 4)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 19)
Advances in Computer Engineering     Open Access   (Followers: 4)
Advances in Computer Science : an International Journal     Open Access   (Followers: 14)
Advances in Computing     Open Access   (Followers: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 56)
Advances in Engineering Software     Hybrid Journal   (Followers: 28)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 14)
Advances in Human Factors/Ergonomics     Full-text available via subscription   (Followers: 22)
Advances in Human-Computer Interaction     Open Access   (Followers: 20)
Advances in Materials Science     Open Access   (Followers: 14)
Advances in Operations Research     Open Access   (Followers: 12)
Advances in Parallel Computing     Full-text available via subscription   (Followers: 7)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Remote Sensing     Open Access   (Followers: 49)
Advances in Science and Research (ASR)     Open Access   (Followers: 6)
Advances in Technology Innovation     Open Access   (Followers: 6)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 4)
AI EDAM     Hybrid Journal   (Followers: 1)
Air, Soil & Water Research     Open Access   (Followers: 12)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 6)
Algebras and Representation Theory     Hybrid Journal   (Followers: 1)
Algorithms     Open Access   (Followers: 11)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 5)
American Journal of Computational Mathematics     Open Access   (Followers: 4)
American Journal of Information Systems     Open Access   (Followers: 6)
American Journal of Sensor Technology     Open Access   (Followers: 4)
Anais da Academia Brasileira de Ciências     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 7)
Analysis in Theory and Applications     Hybrid Journal   (Followers: 1)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 5)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 12)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 13)
Annals of Pure and Applied Logic     Open Access   (Followers: 3)
Annals of Software Engineering     Hybrid Journal   (Followers: 13)
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access  
Annual Reviews in Control     Hybrid Journal   (Followers: 8)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 2)
Applied and Computational Harmonic Analysis     Full-text available via subscription   (Followers: 1)
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 12)
Applied Categorical Structures     Hybrid Journal   (Followers: 3)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 13)
Applied Computer Systems     Open Access   (Followers: 2)
Applied Informatics     Open Access  
Applied Mathematics and Computation     Hybrid Journal   (Followers: 33)
Applied Medical Informatics     Open Access   (Followers: 10)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Soft Computing     Hybrid Journal   (Followers: 16)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 5)
Applied System Innovation     Open Access  
Architectural Theory Review     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 5)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 144)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 5)
arq: Architectural Research Quarterly     Hybrid Journal   (Followers: 8)
Artifact     Hybrid Journal   (Followers: 2)
Artificial Life     Hybrid Journal   (Followers: 7)
Asia Pacific Journal on Computational Engineering     Open Access  
Asia-Pacific Journal of Information Technology and Multimedia     Open Access   (Followers: 1)
Asian Journal of Computer Science and Information Technology     Open Access  
Asian Journal of Control     Hybrid Journal  
Assembly Automation     Hybrid Journal   (Followers: 2)
at - Automatisierungstechnik     Hybrid Journal   (Followers: 1)
Australian Educational Computing     Open Access   (Followers: 1)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 5)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 13)
Automation in Construction     Hybrid Journal   (Followers: 7)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 8)
Basin Research     Hybrid Journal   (Followers: 5)
Behaviour & Information Technology     Hybrid Journal   (Followers: 52)
Big Data and Cognitive Computing     Open Access   (Followers: 2)
Biodiversity Information Science and Standards     Open Access  
Bioinformatics     Hybrid Journal   (Followers: 309)
Biomedical Engineering     Hybrid Journal   (Followers: 16)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 13)
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 20)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 35)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 50)
British Journal of Educational Technology     Hybrid Journal   (Followers: 149)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 12)
c't Magazin fuer Computertechnik     Full-text available via subscription   (Followers: 1)
CALCOLO     Hybrid Journal  
Calphad     Hybrid Journal   (Followers: 2)
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 15)
Capturing Intelligence     Full-text available via subscription  
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 2)
Cell Communication and Signaling     Open Access   (Followers: 2)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 5)
CERN IdeaSquare Journal of Experimental Innovation     Open Access   (Followers: 3)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 15)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 8)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
CIN Computers Informatics Nursing     Hybrid Journal   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 15)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Clinical eHealth     Open Access  
Cluster Computing     Hybrid Journal   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 14)
Communication Methods and Measures     Hybrid Journal   (Followers: 12)
Communication Theory     Hybrid Journal   (Followers: 23)
Communications Engineer     Hybrid Journal   (Followers: 1)
Communications in Algebra     Hybrid Journal   (Followers: 3)
Communications in Computational Physics     Full-text available via subscription   (Followers: 2)
Communications in Information Science and Management Engineering     Open Access   (Followers: 4)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 3)
Communications of the ACM     Full-text available via subscription   (Followers: 51)
Communications of the Association for Information Systems     Open Access   (Followers: 16)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 3)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 6)
Complexus     Full-text available via subscription  
Composite Materials Series     Full-text available via subscription   (Followers: 8)
Computación y Sistemas     Open Access  
Computation     Open Access   (Followers: 1)
Computational and Applied Mathematics     Hybrid Journal   (Followers: 3)
Computational and Mathematical Biophysics     Open Access   (Followers: 1)
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 2)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 9)
Computational Astrophysics and Cosmology     Open Access   (Followers: 1)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 12)
Computational Chemistry     Open Access   (Followers: 2)
Computational Cognitive Science     Open Access   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Condensed Matter     Open Access  
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 9)
Computational Geosciences     Hybrid Journal   (Followers: 17)
Computational Linguistics     Open Access   (Followers: 23)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 8)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 30)
Computer     Full-text available via subscription   (Followers: 99)
Computer Aided Surgery     Open Access   (Followers: 6)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 8)
Computer Communications     Hybrid Journal   (Followers: 16)
Computer Journal     Hybrid Journal   (Followers: 9)


Applied Soft Computing
Journal Prestige (SJR): 1.199
Citation Impact (citeScore): 5
Number of Followers: 16  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 1568-4946
Published by Elsevier  [3162 journals]
  • Hybrid Variational Mode Decomposition and evolutionary robust kernel
           extreme learning machine for stock price and movement prediction on daily
           basis
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Ranjeeta Bisoi, P.K. Dash, A.K. Parida. The Empirical Mode Decomposition (EMD) has been applied successfully in many forecasting problems. The Variational Mode Decomposition (VMD), a more effective decomposition technique, has been proposed with the aim of avoiding the limitations of EMD. This study focuses on two objectives, day-ahead stock price prediction and daily trend prediction, using a Robust Kernel based Extreme Learning Machine (RKELM) integrated with VMD, where the kernel function parameters are optimized with the Differential Evolution (DE) algorithm; the combination is named DE-VMD-RKELM. Experiments have been conducted on the BSE S&P 500 Index (BSE), the Hang Seng Index (HSI) and the Financial Times Stock Exchange 100 Index (FTSE), and the daily price prediction performance of the proposed VMD-RKELM model is measured in terms of Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). Daily trend prediction, framed as a classification problem, is measured in terms of Percentage of Correct Classification Accuracy (PCCA). The prediction performance of VMD-RKELM is compared with that of a robust Extreme Learning Machine (RELM), an Extreme Learning Machine integrated with EMD (EMD-RELM), a Robust Kernel Extreme Learning Machine integrated with EMD (EMD-RKELM), and two benchmark approaches, Support Vector Regression (SVR) and the Autoregressive Moving Average (ARMA) model. The trend prediction results are compared with a Naive Bayes classifier, an artificial neural network (ANN), and a support vector machine (SVM). The experimental results for both price prediction and trend classification are promising, and the analysis illustrates the superiority of the VMD-RKELM model over the other predictive methods.
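The DE component mentioned above can be sketched in a few lines. The following is a minimal, illustrative DE/rand/1/bin loop minimizing a toy 2-D function, which stands in for the RKELM validation error the authors actually optimize; the population size, F and CR values are arbitrary choices, not the paper's settings.

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin over real-valued parameter vectors."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct partners for the mutant vector.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one crossed gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            f_trial = fitness(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy objective: a 2-D sphere standing in for, e.g., a kernel-parameter search.
best_x, best_f = differential_evolution(lambda x: x[0] ** 2 + x[1] ** 2,
                                        [(-5, 5), (-5, 5)])
```

The greedy selection step guarantees the best solution is never lost, which is why DE is a popular wrapper for hyper-parameter tuning of kernel machines.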
       
  • MOEA/D with angle-based constrained dominance principle for constrained
           multi-objective optimization problems
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Zhun Fan, Yi Fang, Wenji Li, Xinye Cai, Caimin Wei, Erik Goodman. This paper proposes a novel constraint-handling mechanism, namely the angle-based constrained dominance principle (ACDP), to solve constrained multi-objective optimization problems (CMOPs). In this work, the mechanism of ACDP is embedded in a decomposition-based multi-objective evolutionary algorithm (MOEA/D). ACDP uses the angle information among solutions of a population and the proportion of feasible solutions to adjust the dominance relationship, so that it can maintain good convergence, diversity and feasibility of a population simultaneously. To evaluate the performance of the proposed MOEA/D-ACDP, fourteen benchmark instances and an engineering optimization problem are studied. Six state-of-the-art CMOEAs, including C-MOEA/D, MOEA/D-CDP, MOEA/D-Epsilon, MOEA/D-SR, NSGA-II-CDP and SP, are compared. The experimental results illustrate that MOEA/D-ACDP is significantly better than the other six CMOEAs on these benchmark problems and the real-world case, which demonstrates the effectiveness of ACDP.
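For context, the plain constrained dominance principle (CDP) that ACDP extends can be sketched as below; the angle and feasibility-ratio adjustment that is the paper's actual contribution is omitted, so this is only the baseline comparison rule.

```python
def dominates(f_a, f_b):
    """Pareto dominance for minimization objectives."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def cdp_better(obj_a, cv_a, obj_b, cv_b):
    """Constrained dominance principle: obj_* are objective vectors,
    cv_* are aggregate constraint violations (0 means feasible).
    Returns True if solution a is preferred over solution b."""
    if cv_a == 0 and cv_b > 0:       # feasible beats infeasible
        return True
    if cv_a > 0 and cv_b == 0:
        return False
    if cv_a > 0 and cv_b > 0:        # both infeasible: less violation wins
        return cv_a < cv_b
    return dominates(obj_a, obj_b)   # both feasible: Pareto dominance
```

ACDP softens the hard feasible-first rule above by letting the angle between solutions and the current feasible ratio decide when a slightly infeasible but well-converged solution should survive.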
       
  • A Novel fuzzy frame selection based watermarking scheme for MPEG-4 videos
           using Bi-directional extreme learning machine
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Ankit Rajpal, Anurag Mishra, Rajni Bala. With the advancement of Internet technology, a large amount of multimedia content is shared online, raising concerns such as copyright protection, content authentication, and ownership. In this paper, we develop a novel watermarking scheme for Advanced Video Coding (MPEG-4 AVC) videos using a Bi-directional Extreme Learning Machine (B-ELM). Relevant frames to be watermarked are identified by a newly developed fuzzy algorithm based on a threshold operation. A binary watermark, encrypted by a transposition cipher, is used to ensure enhanced security. The proposed watermarking scheme has been tested on five benchmark videos (Akiyo, Hall_Monitor, Mother_Daughter, Pamphlet, and Silent). High average Peak Signal to Noise Ratio (PSNR) values of the signed video sequences indicate good visual quality post-embedding. Common video interferences such as scaling, noising, cropping, filtering, frame dropping, frame averaging, and frame swapping are used to evaluate the robustness of the watermark embedding scheme. The high Normalized Correlation (NC(W,W′)) and low Bit-Error-Rate (BER(W,W′)) values between the original watermark (W) and the recovered watermark (W′) establish the good robustness of the proposed scheme. While established algorithms such as Fuzzy Inference Systems (FIS), Back Propagation Networks (BPN), and meta-heuristic techniques take on the order of several seconds to process a frame, the proposed algorithm consumes only a few milliseconds. Thus, it is well suited to the real-time watermarking of compressed videos.
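A columnar transposition cipher of the kind mentioned for the binary watermark can be sketched as follows. The key and the 12-bit watermark are invented for illustration; the paper does not state its cipher parameters.

```python
def transpose_encrypt(bits, key):
    """Columnar transposition: write bits row-wise into len(key) columns,
    then read the columns out in the order given by key. len(bits) is
    assumed to be a multiple of len(key) so no padding is needed."""
    cols = len(key)
    rows = [bits[i:i + cols] for i in range(0, len(bits), cols)]
    return [row[k] for k in key for row in rows]

def transpose_decrypt(cipher, key):
    """Invert transpose_encrypt by writing each column back in place."""
    cols = len(key)
    rows_n = len(cipher) // cols
    plain = [0] * len(cipher)
    idx = 0
    for k in key:
        for r in range(rows_n):
            plain[r * cols + k] = cipher[idx]
            idx += 1
    return plain

watermark = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]  # hypothetical 12-bit watermark
key = [2, 0, 3, 1]                                 # column read order
cipher = transpose_encrypt(watermark, key)
```

A transposition cipher only permutes the bits, so the recovered watermark is bit-identical after decryption, which is what the NC and BER measures in the abstract quantify after channel attacks.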
       
  • Time–frequency masking based supervised speech enhancement framework
           using fuzzy deep belief network
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Suman Samui, Indrajit Chakrabarti, Soumya K. Ghosh. In recent years, deep learning based supervised speech enhancement methods have gained a considerable amount of research attention over statistical signal processing based methods. In this study, we consider the time–frequency masking based deep learning framework for speech enhancement and investigate how the performance of these methods can be improved further. We establish that a significant performance improvement can be achieved if the deep neural network (DNN) is pre-trained using Fuzzy Restricted Boltzmann Machines (FRBM) rather than regular Restricted Boltzmann Machines (RBM), mainly because the performance of an FRBM is more robust and effective when the training data is noisy. In order to train an FRBM, we adopt a defuzzification method based on the crisp probabilistic mean value of fuzzy numbers. The detailed theory of the training strategy of an FRBM with different fuzzy membership functions, such as Symmetric Triangular Fuzzy Numbers (STFN) and Asymmetric Triangular Fuzzy Numbers (ATFN), is presented. Furthermore, we evaluate the performance of the proposed training strategies on different DNN based Speech Enhancement Systems (SES) developed with different training targets, such as the Complex Ideal Ratio Mask (cIRM), the Ideal Ratio Mask (IRM) and the Phase-Sensitive Mask (PSM). Experimental results on various noise scenarios show that a DNN-based speech enhancement system trained by the proposed approach yields a consistent improvement in various objective measures of perceived speech quality and intelligibility compared to conventional DNN-based speech enhancement methods that use regular RBMs for unsupervised pre-training.
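As a reminder of what the IRM training target mentioned above computes, a per-bin ideal ratio mask can be sketched as below; the magnitudes are toy values, whereas real systems operate on STFT magnitude spectrograms.

```python
import math

def ideal_ratio_mask(speech_mag, noise_mag):
    """IRM per time-frequency bin: sqrt(S^2 / (S^2 + N^2)), bounded in [0, 1]."""
    return [
        math.sqrt(s * s / (s * s + n * n)) if (s or n) else 0.0
        for s, n in zip(speech_mag, noise_mag)
    ]

def apply_mask(noisy_mag, mask):
    """Enhancement step: scale each noisy-magnitude bin by its mask value."""
    return [m * y for m, y in zip(mask, noisy_mag)]

speech = [1.0, 0.5, 0.0, 2.0]   # toy clean speech magnitudes
noise = [0.0, 0.5, 1.0, 2.0]    # toy noise magnitudes
mask = ideal_ratio_mask(speech, noise)
```

The DNN (here, the FRBM-pre-trained one) is trained to predict this mask from noisy features; at test time the predicted mask is applied to the noisy magnitudes as in apply_mask.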
       
  • Interpretable fuzzy partitioning of classified data with variable
           granularity
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Ciro Castiello, Anna Maria Fanelli, Marco Lucarelli, Corrado Mencar. Fuzzy rule-based systems are effective tools for acquiring knowledge from data and representing it in a linguistically interpretable form. To achieve interpretability, input features are granulated into fuzzy partitions. A critical design decision is the selection of the granularity level for each input feature. This paper presents an approach, called DC* (Double Clustering with A*), for automatically designing interpretable fuzzy partitions with optimal granularity. DC* is specific to classification problems and is mainly based on a two-stage process: the first stage identifies clusters of multidimensional samples in order to derive class-labeled prototypes; in the second stage the one-dimensional projections of these prototypes are further clustered along each dimension simultaneously, thus minimizing the number of clusters for each feature. Moreover, the resulting one-dimensional clusters provide the information needed to define fuzzy partitions that satisfy a number of interpretability constraints and exhibit variable granularity levels. The fuzzy sets in each partition can be labeled by meaningful linguistic terms and used to represent knowledge in natural language form. Experimental results on both synthetic and real data show that the derived fuzzy partitions can be exploited to define very compact fuzzy rule-based systems that exhibit high linguistic interpretability and good classification accuracy.
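A strong (Ruspini) triangular fuzzy partition, the kind of interpretable partition such methods produce for each feature, can be sketched as below. The cut points are hypothetical, not the output of DC* itself.

```python
def triangular_partition(cuts):
    """Build a strong fuzzy partition from sorted prototype points: one
    triangular membership function peaks at each point, with shoulders at
    the domain edges. Adjacent memberships sum to 1 everywhere (the
    Ruspini condition), which is one common interpretability constraint."""
    def membership(i, x):
        left = cuts[i - 1] if i > 0 else None
        peak = cuts[i]
        right = cuts[i + 1] if i + 1 < len(cuts) else None
        if x == peak:
            return 1.0
        if left is not None and left < x < peak:
            return (x - left) / (peak - left)
        if right is not None and peak < x < right:
            return (right - x) / (right - peak)
        if (left is None and x < peak) or (right is None and x > peak):
            return 1.0  # shoulder beyond the first/last prototype
        return 0.0
    return [lambda x, i=i: membership(i, x) for i in range(len(cuts))]

# Hypothetical prototype projections for one feature: "low", "medium", "high".
low, medium, high = triangular_partition([0.0, 0.5, 1.0])
```

Because every point of the domain has memberships summing to one and each set peaks at a distinct prototype, the sets can be given linguistic labels directly, which is the interpretability property the abstract emphasizes.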
       
  • An automated approach for the design of Mechanically Stabilized Earth
           Walls incorporating metaheuristic optimization algorithms
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Yagizer Yalcin, Murat Orhon, Onur Pekcan. Considered cost-efficient, reliable and aesthetic alternatives to conventional retaining structures, Mechanically Stabilized Earth Walls (MSEWs) have been used increasingly in civil engineering practice over recent decades. The design of these structures is conventionally based on engineering guidelines, requiring trial and error to determine the design variables. The quality and cost effectiveness of the design are therefore limited by the effort, intuition, and experience of the engineer, and the process tends to be time-consuming; both issues can be addressed by automated approaches. To this end, the present study introduces a novel framework to optimize the (i) reinforcement type, (ii) length, and (iii) layout of MSEWs for minimum cost, integrating metaheuristic optimization algorithms in compliance with the Federal Highway Administration guidelines. The framework is conjoined with optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Differential Evolution (DE), and tested on a set of benchmark design problems that incorporate various types of MSEWs with different heights. The results are comparatively evaluated to identify the most effective optimization algorithm and validated using a well-known MSEW analysis and design software package. The outcomes indicate that the proposed framework, implemented with a powerful optimization algorithm, can produce the optimum design in a matter of seconds. On the basis of its improved results over GA, PSO, and ABC, the DE algorithm is recommended.
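A minimal genetic algorithm of the kind wrapped by such frameworks can be sketched as follows; the cost function here is a toy placeholder, not the FHWA-based MSEW cost model, and the operator choices are illustrative.

```python
import random

def genetic_algorithm(cost, bounds, pop_size=30, generations=80,
                      mutation_rate=0.2, seed=1):
    """Tiny real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation. Minimizes `cost` over a box-bounded search space."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=cost)[: pop_size // 2]   # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = [rng.choice(g) for g in zip(p1, p2)]  # uniform crossover
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    step = rng.gauss(0, 0.1 * (hi - lo))
                    child[j] = min(max(child[j] + step, lo), hi)
            children.append(child)
        pop = elite + children   # elitism: the current best is never lost
    best = min(pop, key=cost)
    return best, cost(best)

# Toy stand-in for a wall-cost model over two continuous design variables.
best, best_cost = genetic_algorithm(lambda x: (x[0] - 3) ** 2 + (x[1] - 7) ** 2,
                                    [(0, 10), (0, 10)])
```

In the paper's setting the chromosome would encode reinforcement type, length and layout, and `cost` would be the guideline-compliant wall cost; the loop structure is the same.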
       
  • A hybrid intelligent model for network selection in the industrial
           Internet of Things
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Shidrokh Goudarzi, Mohammad Hossein Anisi, Abdul Hanan Abdullah, Jaime Lloret, Seyed Ahmad Soleymani, Wan Haslina Hassan. The Industrial Internet of Things (IIoT) plays an important role in increasing productivity and efficiency in heterogeneous wireless networks. However, different domains, such as industrial wireless scenarios, small cell domains and vehicular ad hoc networks (VANETs), require an efficient machine learning/intelligent algorithm to process the vertical handover decision so that mobile terminals (MTs) can be kept in the preferable networks for a sufficient duration of time, with preferred quality-of-service parameters differentiated from those of all the other MTs. Hence, in this paper, the vertical handoff (VHO) decision is formulated as a Markov decision process aimed at maximizing the expected total reward while minimizing the average number of handoffs. A reward function is designed to evaluate the QoS at the point where a connection takes place, from which a stationary deterministic handoff policy can be established. The proposed hybrid model merges biogeography-based optimization (BBO) with the Markov decision process (MDP): the MDP establishes the radio access technology (RAT) selection probability, which acts as an input to the BBO process, and the BBO then determines the best RAT using the described multi-point algorithm in the heterogeneous network. The numerical findings show the superiority of the proposed schemes over other available algorithms: the MDP-BBO algorithm outperforms them in terms of number of handoffs, bandwidth availability, and decision delay, and achieves a higher expected total reward and a lower average number of handoffs than current approaches. Simulation results obtained from Monte-Carlo experiments validate the proposed model.
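The MDP component can be illustrated with a tiny value-iteration sketch over a hypothetical two-network model; the states, rewards and handoff penalty below are invented for illustration, while the paper's reward function is QoS-based.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-8):
    """Generic value iteration; returns the optimal values and policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {
        s: max(actions, key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
        for s in states
    }
    return V, policy

# Hypothetical model: network A has good QoS (reward 1.0), network B poor
# QoS (0.2); switching networks costs a 0.5 handoff penalty.
states, actions = ["A", "B"], ["stay", "switch"]
transition = lambda s, a: s if a == "stay" else ("B" if s == "A" else "A")
reward = lambda s, a: ((1.0 if transition(s, a) == "A" else 0.2)
                       - (0.5 if a == "switch" else 0.0))
V, policy = value_iteration(states, actions, transition, reward)
```

The resulting policy stays on the good network and switches away from the poor one, i.e. the handoff penalty discourages unnecessary handoffs exactly as the abstract's objective ("maximize total reward, minimize handoffs") intends.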
       
  • Feature selection based on artificial bee colony and gradient boosting
           decision tree
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Haidi Rao, Xianzhang Shi, Ahoussou Kouassi Rodrigue, Juanjuan Feng, Yingchun Xia, Mohamed Elhoseny, Xiaohui Yuan, Lichuan Gu. Data from many real-world applications can be high dimensional, and the features of such data are usually highly redundant. Identifying informative features has become an important step in data mining, not only to circumvent the curse of dimensionality but also to reduce the amount of data to be processed. In this paper, we propose a novel feature selection method based on the artificial bee colony algorithm and the gradient boosting decision tree, aimed at improving both the efficiency of selection and the informative quality of the selected features. Our method achieves global optimization of the inputs of the decision tree, using the bee colony algorithm to identify the informative features. The method initializes the feature space spanned by the dataset; less relevant features are then suppressed according to the information they contribute to the decision making, using the artificial bee colony algorithm. Experiments are conducted with two breast cancer datasets and six datasets from a public data repository. The experimental results demonstrate that the proposed method effectively reduces the dimensionality of the dataset and achieves superior classification accuracy using the selected features.
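A greatly simplified bee-colony-style search over feature subsets can be sketched as below. The toy fitness stands in for the GBDT cross-validation score the paper uses, and the employed/onlooker phases are collapsed into one neighbourhood step, so this is a schematic, not the authors' algorithm.

```python
import random

def bee_colony_select(n_features, fitness, colony=10, cycles=60, limit=8, seed=2):
    """ABC-style subset search: each food source is a feature bitmask,
    neighbours are single-bit flips, and sources that fail to improve for
    `limit` tries are re-seeded at random (scout phase)."""
    rng = random.Random(seed)
    random_subset = lambda: [rng.randint(0, 1) for _ in range(n_features)]
    sources = [random_subset() for _ in range(colony)]
    scores = [fitness(s) for s in sources]
    trials = [0] * colony
    g = max(range(colony), key=lambda i: scores[i])
    best, best_score = sources[g][:], scores[g]
    for _ in range(cycles):
        for i in range(colony):
            neighbour = sources[i][:]
            neighbour[rng.randrange(n_features)] ^= 1   # flip one feature
            sc = fitness(neighbour)
            if sc > scores[i]:
                sources[i], scores[i], trials[i] = neighbour, sc, 0
                if sc > best_score:                     # track global best
                    best, best_score = neighbour[:], sc
            else:
                trials[i] += 1
            if trials[i] > limit:                       # scout phase
                sources[i], trials[i] = random_subset(), 0
                scores[i] = fitness(sources[i])
    return best, best_score

# Toy fitness: features 0-2 are informative, the rest noise; the size
# penalty stands in for the GBDT accuracy/complexity trade-off.
weights = [1.0, 1.0, 1.0] + [0.05] * 7
toy_fitness = lambda s: sum(w for w, b in zip(weights, s) if b) - 0.3 * sum(s)
subset, score = bee_colony_select(10, toy_fitness)
```

With this fitness, selecting only the three informative features is optimal (score 2.1), since each noise feature costs more than it contributes.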
       
  • Indifference threshold-based attribute ratio analysis: A method for
           assigning the weights to the attributes in multiple attribute decision
           making
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Mohammad Ali Hatefi. In this paper, a new method called Indifference Threshold-based Attribute Ratio Analysis (ITARA for short) is proposed to assign the weights to the attributes in Multiple Attribute Decision Making (MADM) problems. The proposed method is based on the concept of the “Indifference Threshold (IT)” and belongs to a group of techniques which are based on measuring data dispersion. The proposed technique is applied to solve a numerical example. Additionally, a simulation experiment is designed to investigate the performance of the proposed method and other relevant methods. The results indicate that the proposed method is substantially superior to other methods in almost all the cases studied.
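The general idea of dispersion-based weighting with an indifference threshold can be sketched as follows. This is one plausible reading of such a scheme, invented for illustration; it is not the paper's exact ITARA formulas.

```python
def dispersion_weights(matrix, indifference=0.05):
    """Toy threshold-based dispersion weighting: normalize each attribute
    column by its sum, sum the sorted gaps that exceed an indifference
    threshold, and normalize the per-attribute totals into weights.
    Attributes whose values barely differ (all gaps below the threshold)
    receive zero weight, since they cannot discriminate alternatives."""
    n_attrs = len(matrix[0])
    totals = []
    for j in range(n_attrs):
        col = [row[j] for row in matrix]
        s = sum(col)
        norm = sorted(v / s if s else 0.0 for v in col)
        gaps = [b - a for a, b in zip(norm, norm[1:])]
        totals.append(sum(g - indifference for g in gaps if g > indifference))
    total = sum(totals)
    return [t / total if total else 1.0 / n_attrs for t in totals]

# Three alternatives scored on two attributes; the second attribute is far
# more dispersed, so it should carry (almost) all the weight.
decision_matrix = [
    [0.50, 0.1],
    [0.52, 0.5],
    [0.54, 0.9],
]
w = dispersion_weights(decision_matrix)
```

Here the nearly constant first attribute gets weight 0 and the dispersed second attribute gets weight 1, matching the intuition that an attribute on which alternatives are practically indifferent should not influence the ranking.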
       
  • Efficient solution algorithms for a time-critical reliable transportation
           problem in multigraph networks with FIFO property
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Hamid Tikani, Mostafa Setak. Relief distribution in urban environments is one of the major activities in emergency logistics management, and an effective, time-saving dispatching process in affected areas is pivotal in rescue operations. In this study, we formulate a reliable time-dependent vehicle routing problem with time windows on a multigraph-based network, in which parallel arcs with multiple attributes exist between nodes. The purpose of the model is to minimize delays in delivering prioritized items in disaster response operations while controlling the minimum reliability of each route; controlling reliability in relief distribution gives assurance that emergency packages can reach their destinations safely and in a timely manner. In order to solve the problem, a novel restricted dynamic programming algorithm is applied through the giant-tour representation; the algorithm reaches the optimal solution when utilized in an unrestricted way. In addition, a modified caching genetic algorithm and a three-phase optimization method based on the tabu search heuristic are provided to deal with larger instances in reasonable computation times. Finally, a real transportation case is presented to illustrate the potential applicability of the model in urban environments. The results accentuate the efficiency of the proposed methods and show the significance of the multigraph representation in accelerating distribution operations for reliable emergency logistics planning.
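The FIFO property the network must satisfy, namely that departing later can never mean arriving earlier, can be checked as below. The travel-time profiles are invented examples.

```python
def satisfies_fifo(travel_time, horizon, step=0.25):
    """Check the FIFO property of a time-dependent arc by sampling: the
    arrival time t + travel_time(t) must be non-decreasing in the
    departure time t over [0, horizon)."""
    times = [i * step for i in range(int(horizon / step))]
    arrivals = [t + travel_time(t) for t in times]
    return all(a2 >= a1 for a1, a2 in zip(arrivals, arrivals[1:]))

# A congestion profile whose travel time grows gently satisfies FIFO...
gentle = lambda t: 10 + 0.5 * t
# ...but one that drops faster than time advances violates it:
# leave at t=0 and arrive at 30, or leave at t=10 and arrive at 15.
cliff = lambda t: 30 if t < 10 else 5
```

On FIFO networks, waiting at a node never pays off, which is exactly what lets dynamic-programming label algorithms like the restricted DP above prune dominated partial routes safely.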
       
  • Multi-criteria game model based on the pairwise comparisons of strategies
           with Z-numbers
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Hong-gang Peng, Xiao-kang Wang, Tie-li Wang, Jian-qiang Wang. The Z-number is the most general representation of real-life information with reliability, and it also exhibits the highest descriptive power from the perspective of human cognition. This study focuses on the development of a computationally simple and effective Z-number model to address game problems by systematically comparing pairwise strategies derived from different players. The processing of Z-numbers requires computing fuzzy and probabilistic uncertainties; the cloud model is therefore suggested to handle the bimodal restriction in Z-numbers. In this manner, a novel concept of the asymmetric normal Z-value (ANZ) is proposed. Subsequently, the concordance/discordance index and outranking relations of ANZs are suggested based on classic outranking rules. Next, an innovative game model is established by modeling the outranking relations of the strategies of different players under multiple criteria. Finally, an illustrative example concerning enterprise market competition is provided to demonstrate the established model.
       
  • Couple-based particle swarm optimization for short-term hydrothermal
           scheduling
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Yuqiang Wu, Yonggang Wu, Xinglong Liu. A novel couple-based particle swarm optimization (CPSO) is presented in this paper and applied to solve the short-term hydrothermal scheduling (STHS) problem. In CPSO, three improvements over the canonical particle swarm optimization are proposed, aimed at overcoming premature convergence. Dynamic particle couples, a sub-group structure that helps maintain population diversity, are adopted as the population topology, in which every two particles randomly form a couple in each iteration. Based on this topology, an intersectional learning strategy using the partner's learning information from the last iteration is employed in every particle couple, which can automatically exploit useful historical information and temper an overly rapid evolution speed. Meanwhile, the coefficients of each particle in a couple are set to distinct values so that particle movement patterns can be described and controlled more precisely. To demonstrate the effectiveness of the proposed CPSO, the algorithm is first tested on four multimodal benchmark functions and then applied to an engineering multimodal problem known as STHS, in which two typical test systems with four different cases are tested and the results are compared with those of other evolutionary methods published in the literature.
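The couple topology described above can be sketched as a small variant of the standard PSO velocity update: each iteration the swarm is randomly re-paired, and every particle gains an extra attraction toward its partner's personal best. The sphere objective, the coefficients, and the exact partner-learning rule below are illustrative assumptions; the abstract does not specify them.

```python
import random

def cpso_sphere(dim=2, n=8, iters=60, seed=1):
    """Minimal sketch of the particle-couple idea on a sphere
    objective. Each iteration: shuffle indices, pair them off, then
    add a partner-pbest term to the usual inertia/cognitive/social
    velocity update. All coefficients are assumed, not the paper's."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        order = list(range(n))
        rng.shuffle(order)                     # re-pair the swarm each iteration
        partner = {}
        for a, b in zip(order[::2], order[1::2]):
            partner[a], partner[b] = b, a
        for i in range(n):
            q = partner[i]
            for d in range(dim):
                vel[i][d] = (0.6 * vel[i][d]
                             + 1.2 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 0.6 * rng.random() * (pbest[q][d] - pos[i][d])
                             + 1.2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return f(gbest)

val = cpso_sphere()   # best objective value found on the 2-D sphere
```

Because partners change every iteration, information diffuses through the swarm without a fixed neighbourhood graph, which is the diversity-preserving effect the abstract attributes to the couple topology.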
       
  • Fast Laplacian twin support vector machine with active learning for
           pattern classification
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Reshma Rastogi, Sweta Sharma. In this paper, we propose a semi-supervised classifier termed the Fast Laplacian Twin Support Vector Machine (FLap-TWSVM), with the objective of reducing the requirement for labeled data while lessening the training-time complexity of the traditional Laplacian Twin Support Vector Machine semi-supervised classifier. FLap-TWSVM is faster than the existing Laplacian twin support vector machine, as it solves a smaller Quadratic Programming Problem (QPP) along with an Unconstrained Minimization Problem (UMP) to obtain decision hyperplanes, and it can also handle heteroscedastic noise present in the training data. Traditional semi-supervised classifiers generally have no explicit control over the choice of labeled data available for training; to overcome this limitation, we propose a pool-based active learning framework which identifies the most informative examples to train the learning model. Moreover, the aforementioned framework has been extended to deal with multi-category classification scenarios. Several experiments performed on machine learning benchmark datasets prove the utility of the proposed classifier over the traditional Laplacian Twin Support Vector Machine (Lap-TWSVM) and active-learning-based Support Vector Machine (SVMAL). The efficacy of the proposed framework has also been tested on a human activity recognition problem and a content-based image retrieval system.
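The pool-based active learning loop mentioned above has a simple core regardless of the underlying classifier: score every unlabeled example by how uncertain the current model is about it, and query the label of the most uncertain one. The sketch below uses a nearest-centroid model and a distance-margin uncertainty score as a generic stand-in, not the paper's FLap-TWSVM criterion; all data points are invented for illustration.

```python
def margin(point, c0, c1):
    """Absolute difference of squared distances to the two class
    centroids: a small margin means the point sits near the decision
    boundary, i.e. the model is uncertain about it."""
    d0 = sum((p - c) ** 2 for p, c in zip(point, c0))
    d1 = sum((p - c) ** 2 for p, c in zip(point, c1))
    return abs(d0 - d1)

def query(pool, c0, c1):
    """Pool-based uncertainty sampling: return the unlabeled point the
    current model is least sure about (generic sketch, not the
    paper's informativeness measure)."""
    return min(pool, key=lambda x: margin(x, c0, c1))

c0, c1 = (0.0, 0.0), (4.0, 0.0)          # current class centroids
pool = [(0.5, 0.0), (2.1, 0.0), (3.9, 1.0)]
picked = query(pool, c0, c1)             # the point nearest the boundary
```

After the oracle labels `picked`, the model is retrained and the loop repeats until the labeling budget is exhausted.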
       
  • A hybridization of extended Kalman filter and Ant Colony Optimization for
           state estimation of nonlinear systems
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Hadi Nobahari, Alireza Sharifi. In this paper, a new nonlinear heuristic filter based on the hybridization of an extended Kalman filter and an ant colony estimator is proposed to estimate the states of a nonlinear system. In this filter, a group of virtual ants searches the state space stochastically and dynamically to find and track the best state estimate, while the position of each ant is updated at the measurement time using the extended Kalman filter. The performance of the proposed filter is compared with well-known heuristic filters using a nonlinear benchmark problem. The statistical results show that this algorithm is able to provide promising and competitive results. Then, the new filter is tested on a nonlinear engineering problem with more than one state: estimating simultaneously the states of an unmanned aerial vehicle as well as the wind disturbances applied to the system. In this case, a processor-in-the-loop experiment is also performed to verify the implementation capability of the proposed approach. This paper also investigates the real-time implementation capability of the proposed filter in the attitude estimation of a three-degrees-of-freedom experimental quadrotor setup to further investigate its effectiveness in practice.
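The extended Kalman filter building block that updates each ant's position can be illustrated on a scalar toy system: a linear process with a nonlinear measurement, where the EKF linearizes the measurement function around the prediction. The system (h(x) = x²) and all noise values below are invented for illustration; the paper embeds this kind of update inside an ant-colony search over the state space.

```python
def ekf_step(x, P, u, z, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter for
    the toy system x_k = x_{k-1} + u with nonlinear measurement
    z = x**2 + noise (illustrative, not the paper's UAV model)."""
    # Predict
    x_pred = x + u
    P_pred = P + Q
    # Update: linearize h(x) = x**2 around the prediction
    H = 2.0 * x_pred                    # Jacobian dh/dx at x_pred
    S = H * P_pred * H + R              # innovation covariance
    K = P_pred * H / S                  # Kalman gain
    x_new = x_pred + K * (z - x_pred ** 2)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 1.0, 1.0                         # initial estimate and covariance
true_x = 2.0
for _ in range(20):                     # repeated noiseless measurements
    x, P = ekf_step(x, P, 0.0, true_x ** 2, 1e-4, 0.01)
```

With repeated measurements the estimate converges to the true state and the covariance shrinks, which is exactly the local refinement the hybrid filter applies to each ant at measurement time.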
       
  • Mixed grey wolf optimizer for the joint denoising and unmixing of
           multispectral images
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Benoit Martin, Julien Marot, Salah Bourennane. Grey wolf optimizer (GWO) is a bio-inspired iterative optimization algorithm that simulates the hunting behaviour of a pack of wolves, whose positions in a search space are updated across iterations under the leadership of the α, β, and δ wolves. In this work, a novel mixed GWO is proposed which, for the first time, searches for parameter values in both continuous and discrete spaces. In the proposed approach, the update rules are as follows: the leaders guide the hunt, assisted by two random wolves during the first half of the iterations; for the continuous parameters, a weighted combination of the leaders’ contributions is calculated; for the discrete parameters, a random selection is performed instead, and the probability of the α being selected increases, to the detriment of the other leaders, across iterations. The exploration and exploitation phases are distinguished for continuous and discrete parameters. The proposed mixed GWO is compared against other bio-inspired optimization methods using several test problems which are either continuous, discrete or mixed: the proposed algorithm significantly improves the performance metrics. Moreover, our method is adapted to simultaneously denoise and unmix real-world multispectral images.
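The baseline continuous GWO update that the mixed variant builds on can be sketched compactly: each wolf moves to the average of its attractions toward the α, β and δ wolves, with an exploration factor `a` that shrinks linearly over the iterations. The sphere objective and all settings below are illustrative; the paper's discrete update rule and the random-wolf assistance are not reproduced here.

```python
import random

def gwo_sphere(dim=2, n=10, iters=60, seed=3):
    """Plain continuous grey wolf optimizer on a sphere objective
    (illustrative baseline, not the paper's mixed variant)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    wolves = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min((f(w), w[:]) for w in wolves)
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2.0 * (1 - t / iters)       # exploration factor decays to 0
        for i in range(n):
            for d in range(dim):
                s = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    s += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                wolves[i][d] = s / 3.0  # average of the three attractions
            fv = f(wolves[i])
            if fv < best[0]:
                best = (fv, wolves[i][:])
    return best[0]

val = gwo_sphere()   # best objective value found on the 2-D sphere
```

While `a > 1` the wolves can overshoot the leaders (exploration); as `a` decays they collapse toward the leader consensus (exploitation), the two phases the abstract distinguishes per parameter type.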
       
  • Dynamic parameter adaptation in metaheuristics using gradient
           approximation and line search
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Vasileios A. Tatsis, Konstantinos E. Parsopoulos. Metaheuristics have been established as effective solvers for challenging optimization problems. However, their performance is highly dependent on their parameter settings. For this reason, various parameter tuning techniques have been developed, spanning two major categories: online and offline. Online techniques are based on performance data of the algorithm during its run, while offline techniques are based on preprocessing or historical performance data of the algorithm. As an alternative to these techniques, we propose a general online parameter adaptation method based on estimates of the algorithm’s performance and gradient search in the parameter domain. The proposed method is demonstrated on Differential Evolution, a state-of-the-art metaheuristic for continuous optimization. Our experimental validation includes problems of low and high dimension as well as comparisons with distinguished adaptive algorithms. The obtained results suggest that the proposed approach is beneficial, relieving the user of the burden of proper parameterization.
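The core ingredient described above, estimating a performance gradient with respect to algorithm parameters and stepping along it, can be sketched with a central finite difference. The analytic "performance surface" below (with an assumed optimum at F = 0.5, CR = 0.9) is a stand-in for the noisy performance estimate a real run would provide, and plain gradient ascent replaces the paper's line search.

```python
def fd_gradient(perf, params, h=1e-3):
    """Central-difference estimate of d(performance)/d(parameters).
    `perf` maps a parameter vector to a scalar performance measure."""
    grad = []
    for i in range(len(params)):
        up = params[:]; up[i] += h
        dn = params[:]; dn[i] -= h
        grad.append((perf(up) - perf(dn)) / (2 * h))
    return grad

# Toy performance surface peaking at F = 0.5, CR = 0.9 (assumed values,
# standing in for measured Differential Evolution performance).
perf = lambda p: -((p[0] - 0.5) ** 2 + (p[1] - 0.9) ** 2)

p = [0.8, 0.4]                      # initial (F, CR) guess
for _ in range(200):                # gradient ascent in parameter space
    g = fd_gradient(perf, p)
    p = [v + 0.05 * gv for v, gv in zip(p, g)]
```

In the actual method the performance measure is estimated from the running algorithm, so the gradient is noisy; the line search the paper adds helps pick a safe step size along it.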
       
  • Learning of fuzzy cognitive maps using a niching-based multi-modal
           multi-agent genetic algorithm
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Ze Yang, Jing Liu. Fuzzy cognitive maps (FCMs) are widely applied to model and analyze complex dynamical systems, and various efficient algorithms have been proposed to learn the FCM weight matrix. However, these algorithms learn only one FCM from the data at a time. Learning only one FCM is not enough for modeling and analyzing complex dynamical systems, because the learned FCM may be just a local optimum. To solve this problem, we aim to learn multiple FCMs simultaneously. To this end, the FCM learning problem is modeled as a multi-modal optimization problem. Niching is the most widely adopted method for dealing with multi-modal optimization; thus, a multi-agent genetic algorithm (MAGA), a popular numerical optimization algorithm, is combined with current niching methods, and a niching-based multi-modal multi-agent genetic algorithm for learning FCMs, termed NMMMAGA-FCM, is proposed. NMMMAGA-FCM learns several FCMs at the same time and then chooses the optimal FCM from all candidates. In the experiments, NMMMAGA-FCM is applied to learn FCMs from synthetic data of varying sizes and densities. The experimental results show that NMMMAGA-FCM can learn FCMs with high accuracy. In addition, NMMMAGA-FCM is validated on the benchmark datasets DREAM3 and DREAM4, where it clearly outperforms other learning algorithms, which illustrates that NMMMAGA-FCM can reconstruct gene regulatory networks (GRNs) effectively.
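The object being learned, the FCM weight matrix, drives a simple inference dynamic that is worth showing: each concept's next activation is a sigmoid of the weighted sum of the current activations. The two-concept map below is invented for illustration, and the sign convention for W is one common choice among several in the FCM literature.

```python
import math

def fcm_step(state, W, lam=1.0):
    """One inference step of a fuzzy cognitive map: each concept's new
    activation is a sigmoid of the weighted sum of all activations.
    W[i][j] is the influence of concept j on concept i."""
    n = len(state)
    new = []
    for i in range(n):
        total = sum(W[i][j] * state[j] for j in range(n))
        new.append(1.0 / (1.0 + math.exp(-lam * total)))
    return new

# Toy two-concept map: concept 1 excites concept 0 (weight 0.8),
# while concept 0 inhibits concept 1 (weight -0.5).
W = [[0.0, 0.8], [-0.5, 0.0]]
s = [0.5, 0.5]
for _ in range(50):          # iterate until the map settles to a fixed point
    s = fcm_step(s, W)
```

Learning an FCM means recovering W from observed state sequences; because many different W matrices can reproduce the same trajectories almost equally well, the problem is multi-modal, which is what motivates the niching approach in the abstract.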
       
  • Ensemble of surrogates assisted particle swarm optimization of medium
           scale expensive problems
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Fan Li, Xiwen Cai, Liang Gao. Surrogate-assisted evolutionary algorithms have been commonly used in extremely expensive optimization problems. However, many existing algorithms are only effective in low-dimensional optimization problems and are easily trapped in a local optimum for medium-scale complex problems. An ensemble-of-surrogates assisted particle swarm optimization (EAPSO) algorithm is proposed in this paper to address this challenge. The EAPSO algorithm uses multiple trial positions for each particle in the swarm and selects the promising positions by using the superiority and uncertainty of the ensemble simultaneously. In addition, a novel variable weight coefficient based on the evolutionary state is proposed to balance exploration and exploitation. For faster convergence and to avoid wrong global attraction of the models, the optima of two surrogates (a polynomial regression model and a radial basis function model) are evaluated in the convergence state of the particles. These strategies ensure a large exploration space for the swarm and control the time to converge. Forty-two benchmark functions widely adopted in the literature are used to evaluate the proposed approach. Experimental results demonstrate the superiority of the EAPSO algorithm compared with three popular algorithms.
       
  • MCEDA: A novel many-objective optimization approach based on model and
           clustering
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Xiaoxu Duan. To solve many-objective optimization problems (MaOPs) by evolutionary algorithms (EAs), maintaining convergence and diversity is essential and difficult. Improved multi-objective optimization evolutionary algorithms (MOEAs), usually based on the genetic algorithm (GA), have been applied to MaOPs, using the crossover and mutation operators of GAs to generate new solutions. In this paper, a new approach based on decomposition and the MOEA/D framework (the multi-objective evolutionary algorithm based on decomposition) is proposed: the model- and clustering-based estimation of distribution algorithm (MCEDA). MCEDA is a new estimation of distribution algorithm (EDA) framework intended to extend the application of EDAs to MaOPs, and it is implemented as two similar algorithms, MCEDA/B (based on the bits model) and MCEDA/RM (based on the regular model). In MCEDA, the problem is decomposed into several subproblems; for each subproblem, a clustering algorithm divides the population into several subgroups, and on each subgroup an estimation model is created to generate the new population. Two kinds of models are adopted in this work: the newly proposed bits model and the regular model used in RM-MEDA (a regularity-model-based multi-objective estimation of distribution algorithm). A non-dominated selection operator is applied to improve convergence. The proposed algorithms have been tested on the DTLZ benchmark test suite for evolutionary algorithms. The comparison with several state-of-the-art algorithms indicates that the proposed MCEDA is a competitive and promising approach.
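The EDA skeleton that MCEDA elaborates on, fit a probabilistic model to the best solutions, then sample the next population from that model instead of using crossover and mutation, can be shown with an independent-Gaussian model on a toy objective. MCEDA builds far richer per-subproblem, clustered models; everything below (objective, population sizes, the variance floor) is an illustrative assumption.

```python
import random
import statistics

def gaussian_eda(f, dim=2, pop=60, elite=15, iters=60, seed=7):
    """Minimal estimation-of-distribution loop: fit an independent
    Gaussian to the elite solutions, then sample the next population
    from it (the EDA skeleton, not MCEDA itself)."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    sd = [3.0] * dim
    for _ in range(iters):
        xs = [[rng.gauss(mu[d], sd[d]) for d in range(dim)]
              for _ in range(pop)]
        xs.sort(key=f)                 # best (lowest) objective first
        top = xs[:elite]
        for d in range(dim):
            col = [x[d] for x in top]
            mu[d] = statistics.fmean(col)
            sd[d] = max(statistics.pstdev(col), 1e-3)  # floor avoids collapse
    return mu

# Quadratic objective with its minimum at (1, -2)
mu = gaussian_eda(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2)
```

Replacing the single Gaussian with one model per cluster, per decomposed subproblem, gives the general shape of the MCEDA framework described in the abstract.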
       
  • Comparison of common machine learning models for classification of
           tuberculosis using transcriptional biomarkers from integrated datasets
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Carly A. Bobak, Alexander J. Titus, Jane E. Hill. Tuberculosis (TB) is a top-10 cause of death worldwide, and new diagnostics are a key element of the World Health Organization’s End TB strategy. Significant research efforts have gone into trying to identify a transcriptional signature from patient blood in order to diagnose TB, but a consistent signature for heterogeneous populations has remained elusive. In this work, we propose a data analysis framework which directly integrates multiple publicly available expression array datasets in order to identify a more reliable gene signature for the diagnosis of TB. The proposed method was built using 4 distinct datasets spanning a total of 1164 samples and 4 countries. The performance and selected gene features of three different machine learning classifiers were compared in the context of this multi-cohort framework. A Random Forest classifier provided the best classification results, with an AUC of 0.8646 on our validation data. Gene ontology enrichment analysis revealed that the selected gene features across the three models are all related to immunological processes.
       
  • Fuzzy granulation and constraint neighbourhood granulation structure for
           object classification in unevenly illuminated images
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Mamata Wagh, Pradipta Kumar Nanda. Classification of objects and background in a complex scene is a challenging early-vision problem, and it is compounded under poor illumination. In this paper, object and background classification is addressed under uneven illumination, where the challenge is to extract the actual boundary in poorly illuminated portions. This has been addressed using the notions of rough sets and granular computing. To take care of differently illuminated portions of the image, an adaptive window-growing approach is employed to partition the image into different windows, and the optimum classification threshold for a given window is determined by one of four proposed granular-computing-based schemes. Within a particular window, illumination can still vary significantly, posing a challenge for classification. To deal with these issues, we propose four schemes based on heterogeneous and non-homogeneous granulation: (i) Heterogeneous Granulation based Window Growing (HGWG), (ii) Empirical Non-homogeneous Granulation based Window Growing (ENHWG), (iii) Fuzzy Gradient Non-homogeneous based Window Growing (FNHWG), and (iv) Fuzzy Gradient Non-homogeneous Constrained Neighbourhood based Window Growing (FNHCNGWG). The proposed schemes have been tested on images from the Berkeley image database, specifically unevenly illuminated images containing single and multiple objects, and their performance has been evaluated using four metrics. The FNHWG and FNHCNGWG schemes have been compared with the Otsu, K-means, FCM, PCM and Pal methods, as well as with the HGWG, ENHWG and Fuzzy Non-homogeneous Neighbourhood based Window Growing (FNHNGWG) schemes, and found to be superior to the existing ones.
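One of the baselines the schemes above are compared against, Otsu's method, is the classical way to pick a per-window threshold: choose the gray level that maximizes the between-class variance of the window's histogram. The synthetic pixel data below is invented to show the idea on a cleanly bimodal window.

```python
def otsu_threshold(pixels, levels=256):
    """Classical Otsu threshold: scan all gray levels and keep the one
    maximizing the between-class variance w0*w1*(m0-m1)**2 of the
    two classes it induces (pixels <= t vs pixels > t)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue                    # class 0 still empty
        w1 = total - w0
        if w1 == 0:
            break                       # class 1 empty: no split left
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A toy window with two well-separated pixel populations: the
# threshold should land between the dark and bright groups.
img = [10] * 50 + [11] * 50 + [200] * 40 + [201] * 40
t = otsu_threshold(img)
```

The granulation-based schemes in the abstract replace this single global criterion with thresholds derived from granule structure, which is what lets them adapt within unevenly lit windows.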
       
  • Repository and Mutation based Particle Swarm Optimization (RMPSO): A new
           PSO variant applied to reconstruction of Gene Regulatory Network
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Biswajit Jana, Suman Mitra, Sriyankar Acharyya. Particle Swarm Optimization (PSO) is a meta-heuristic approach based on swarm intelligence, inspired by the social behaviour of bird flocking and fish schooling. The main disadvantage of basic PSO is that it suffers from premature convergence. To prevent the search process from converging prematurely and to improve the exploration and exploitation capability as a whole, a modified variant named Repository and Mutation based PSO (RMPSO) is proposed in this paper. In RMPSO, apart from applying five-staged successive mutation strategies for improving the swarm best, as in Enhanced Leader PSO (ELPSO), two extra repositories are introduced and maintained to store personal-best and global-best solutions having the same fitness values. In each step, the personal and global best solutions are chosen randomly from their respective repositories, which further enhances exploration while retaining exploitation capability. Computational experiments on benchmark problem instances show that, in most cases, RMPSO performs better than other algorithms in terms of the statistical metrics considered. Moreover, the performance of the proposed algorithm remains consistent in most cases when the dimension of the problem is scaled up. RMPSO is further applied to a practical scenario: the reconstruction of Gene Regulatory Networks (GRNs) based on the Recurrent Neural Network (RNN) model. The experimental results show that RMPSO performs better than state-of-the-art methods on a synthetic (gold standard) gene data set as well as a real gene data set.
       
  • A comparative analysis of speech signal processing algorithms for
           Parkinson’s disease classification and the use of the tunable Q-factor
           wavelet transform
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): C. Okan Sakar, Gorkem Serbes, Aysegul Gunduz, Hunkar C. Tunc, Hatice Nizam, Betul Erdogdu Sakar, Melih Tutuncu, Tarkan Aydin, M. Erdem Isenkul, Hulya Apaydin. In recent years, there has been increasing interest in the development of telediagnosis and telemonitoring systems for Parkinson’s disease (PD) based on measuring the motor system disorders caused by the disease. As approximately 90% of PD patients exhibit some form of vocal disorder in the earlier stages of the disease, recent PD telediagnosis studies focus on detecting vocal impairments from sustained vowel phonations or running speech of the subjects. In these studies, various speech signal processing algorithms have been used to extract clinically useful information for PD assessment, and the calculated features were fed to learning algorithms to construct reliable decision support systems. In this study, we apply, to the best of our knowledge for the first time, the tunable Q-factor wavelet transform (TQWT), which has higher frequency resolution than the classical discrete wavelet transform, to the voice signals of PD patients for feature extraction. We compare the effectiveness of TQWT with the state-of-the-art feature extraction methods used in the diagnosis of PD from vocal disorders. For this purpose, we collected the voice recordings of 252 subjects in the context of this study and extracted multiple feature subsets from the recordings. The feature subsets are fed to multiple classifiers, and the predictions of the classifiers are combined with ensemble learning approaches. The results show that TQWT performs better than or comparably to the state-of-the-art speech signal processing techniques used in PD classification. We also find that the Mel-frequency cepstral coefficients and the tunable Q-factor wavelet coefficients, which give the highest accuracies, contain complementary information for PD classification, resulting in an improved system when combined using a filter feature selection technique.
       
  • Islanding detection of distributed generation by using multi-gene genetic
           programming based classifier
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Emerson Carlos Pedrino, Thiago Yamada, Thiago Reginato Lunardi, José Carlos de Melo Vieira. This paper proposes a new method for detecting islanding of distributed generation (DG) using Multi-gene Genetic Programming (MGP). Islanding has been a serious concern among power distribution utilities and distributed generation owners, because it poses risks to the safety of utilities’ workers and consumers and can damage power distribution systems’ equipment. Therefore, a DG must be disconnected as soon as an islanding is detected. In addition, an islanding detection method must have a high degree of dependability to correctly discriminate islanding from other events, such as load switching, in order to avoid unnecessary disconnection of the distributed generator. In this context, the novelty of the proposed method is that the MGP is capable of obtaining a set of mathematical and logic functions employed to detect and classify islanding correctly. This is a new approach among the computational intelligence methods proposed for DG islanding detection. The main idea is to use local voltage measurements as the input of the method, eliminating the need for complex and expensive communication infrastructure. The method was trained with several islanding and non-islanding cases, using a power distribution system comprising five concentrated loads, a synchronous distributed generator, and a wind power plant. The results showed that the proposed method was successful in differentiating the islanding events from other disturbances, revealing its great potential for anti-islanding protection schemes for distributed generation.
       
  • Applying multi-objective ant colony optimization algorithm for solving the
           unequal area facility layout problems
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Jingfa Liu, Jun Liu. The unequal area facility layout problem (UA-FLP), which deals with the layout of departments in a facility, comprises a class of extremely difficult and widely applicable constrained multi-objective optimization problems arising in diverse areas and meeting the requirements of real-world applications. Based on a heuristic strategy, the problem is first converted into an unconstrained optimization problem. Then a modified version of the multi-objective ant colony optimization (MOACO) algorithm, a heuristic global optimization algorithm with promising performance on many optimization problems, is used to solve the multi-objective UA-FLP. In the modified MOACO algorithm, the ACO with a heuristic layout-updating strategy, proposed to update the layouts and add diversity to the solutions, is a discrete ACO algorithm; unlike general ACO algorithms for discrete domains, however, it does not perform an incremental construction of solutions. We propose a novel pheromone update method and combine Pareto optimization based on local pheromone communication with a global search based on the niche technology to obtain Pareto-optimal solutions of the problem. In addition, a combination of a local search based on the adaptive gradient method and a heuristic department-deformation strategy is applied to handle the non-overlapping constraint between departments so as to obtain feasible solutions. Ten benchmark instances from the literature are tested. The experimental results show that the proposed MOACO algorithm is an effective method for solving the UA-FLP.
       
  • Cross-calibration of categorical variables: An evaluation of the genetic
           algorithm approach
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Rym M’Hallah, Suja Aboukhamseen. This paper highlights the importance of the cross-calibration of categorical variables, models cross-calibration as the forecast of a joint probability distribution, and proposes a non-traditional method that can be applied to any observed sample of joint data points. The sample is generally distorted due to measurement errors and differences among raters. The approach uses a genetic algorithm that predicts the true joint probability of two categorical variables. Unlike existing methods, the proposed approach does not explicitly account for any prior knowledge, does not impose any constraint, does not define a specific agreement, and does not specify the type of dependence that exists between the variables. However, the approach produces good, logical estimates of the probability forecast both at a specific point in time and longitudinally across time. The computational investigation quantifies this performance using different scoring measures and provides computational evidence of its validity and superiority.
       
  • Real time traffic delay optimization using shadowed type-2 fuzzy rule base
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Kajal Chatterjee, Arkajyoti De, Felix T.S. Chan. Traffic networks are growing larger and more complex with rapid traffic growth. Existing Type-2 (T2) fuzzy logic works well in optimizing the waiting time of traffic at a large junction, but its rule base depends heavily on historical traffic data rather than real-time data. Moreover, it fails to change and update the waiting time at junctions with a high rate of traffic. In addition, very large junctions carry dynamic traffic data characterized by a high level of uncertainty, which is difficult for type-2 fuzzy logic to handle. To cope with this situation, Shadowed Type-2 (ST2) fuzzy logic is proposed, as it works well in domains with very noisy and uncertain data; it increases the uncertainty of a fuzzy set by partitioning it into different regions. Thus, based on the ST2 fuzzy rule base, an ST2 fuzzy waiting-time simulator is created, whose output is implemented in a proposed real-time traffic-based Time Optimized Shortest Path (TOSP) model. The model helps in structuring the time-optimized path from one location to another. This is done by taking real-time traffic data from the upcoming junction, calculating the waiting time using the ST2 fuzzy rule base, and finally directing the vehicle to take its optimized path, which reduces the overall waiting time at each junction. To demonstrate the superiority of the proposed model, a case study of a multi-directional (six-directional) junction is presented. The success of this model eases the process of offering it as a mobile application, which can help reduce waiting times at junctions in metropolitan areas.
       
  • On the optimal placement of cameras for surveillance and the underlying
           set cover problem
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Julien Kritter, Mathieu Brévilliers, Julien Lepagnot, Lhassane Idoumghar. Given a delimited surveillance area, represented in either 2D or 3D, and a set of feasible camera locations and orientations, the optimal camera placement problem (OCP) is that of identifying camera configurations which optimise a predefined objective under a set of application-specific constraints. While this problem appears in many modern applications such as area surveillance, target tracking, photogrammetry or traffic and crowd analysis, it finds its roots in one of Karp’s 21 original NP-complete problems: set cover (SCP). Although the two problems are often structurally identical, many papers dealing with optimal camera placement use this relationship only implicitly, and a lot of the work done on the SCP has yet to find its way into the literature on the applied problem. In this survey, we review both problems in terms of modelling, preprocessing and solving approaches, and attempt to bring them together by suggesting open lines of research for future work.
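The SCP connection the survey draws is easy to make concrete: if each candidate camera configuration is identified with the set of area elements it covers, camera placement becomes choosing a minimum-cost family of subsets covering the whole area, for which the classical greedy heuristic (with its well-known ln n approximation guarantee) is the standard baseline. The tiny coverage sets below are invented for illustration.

```python
def greedy_set_cover(universe, subsets):
    """Classical greedy set-cover heuristic: repeatedly pick the subset
    covering the most still-uncovered elements. In the camera-placement
    reading, each subset is the region one configuration sees."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)),
                   key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("universe not coverable by given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Four candidate camera configurations and the cells each one covers
cams = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked = greedy_set_cover({1, 2, 3, 4, 5, 6}, cams)
```

Real OCP instances add the application-specific constraints the abstract mentions (orientation limits, resolution, occlusion), which is where the two literatures diverge.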
       
  • Enhancing the Lithium-ion battery life predictability using a hybrid
           method
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Ling-Ling Li, Zhi-Feng Liu, Ming-Lang Tseng, Anthony S.F. Chiu. This study proposes the improved bird swarm algorithm-optimized least squares support vector machine (IBSA-LSSVM) model to predict the remaining life of lithium-ion batteries. By improving the prediction accuracy of the model, the safety and reliability of new energy storage systems are improved. To prevent the bird swarm algorithm (BSA) from getting trapped in a local optimal solution, the Lévy flight strategy is introduced into the improved bird swarm algorithm (IBSA), which improves the convergence performance of the algorithm. To verify the effectiveness of the proposed hybrid IBSA-LSSVM model, the following work was done: (1) test functions were used to test particle swarm optimization (PSO), the differential evolution algorithm (DE), BSA and IBSA; (2) the back propagation neural network (BP) model, support vector machine (SVM) model, quantum particle swarm optimization support vector machine (QPSO-SVM) model, BSA-LSSVM model and IBSA-LSSVM model were tested on the B5, B6 and B18 batteries. The following findings were obtained: (1) five test functions were used to test the PSO, DE, BSA and IBSA algorithms in 20, 50 and 80 dimensions, and the results show that the convergence accuracy and convergence stability of the IBSA algorithm are higher than those of the other three algorithms; (2) the remaining life of the B5, B6 and B18 batteries was predicted by the BSA-LSSVM, SVM, QPSO-SVM, BP and IBSA-LSSVM models, and the root mean square error of the IBSA-LSSVM model is 0.01 for the B5 battery, 0.06 for the B6 battery, and 0.02 for the B18 battery. The results show that the prediction accuracy of the proposed model is higher than that of the other models.
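The "Lévy flight strategy" mentioned above is usually implemented in swarm optimizers via Mantegna's algorithm, which generates heavy-tailed step lengths: mostly small local moves, with occasional very long jumps that help escape local optima. The abstract does not spell out the paper's exact variant, so the sketch below shows the standard Mantegna formulation with the common stability index β = 1.5.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length: draw
    u ~ N(0, sigma^2) and v ~ N(0, 1), return u / |v|^(1/beta).
    sigma is chosen so the ratio follows a Levy distribution."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng) for _ in range(2000)]
# Heavy tail: a few steps are far larger than the typical (median) one
```

In a BSA/IBSA-style update, a position would be perturbed by `levy_step(rng)` scaled by a step-size factor, giving the occasional long-range exploration that counteracts premature convergence.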
       
  • Genetic algorithm-based optimisation of load-balanced routing for AMI with
           wireless mesh networks
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): A. Robert singh, D. Devaraj, R. Narmatha Banu. The advanced metering infrastructure (AMI) in a smart grid contains hardware, software, and other electronic components connected through a communication infrastructure. AMI transfers meter-reading data between a group of smart meters and a utility centre. Herein, a wireless mesh network (WMN) with a random mesh topology is used to deploy the AMI communication network. In a WMN, paths are identified using the hybrid wireless mesh routing protocol (HWMP) with a load-balancing feature, called load-aware HWMP (LA-HWMP). These paths reduce the demand on links with a minimal airtime metric; however, the data-transmission delay of certain smart meters is high, given the large number of retransmissions caused by packet drops. To avert this problem and reduce the end-to-end delay, a genetic algorithm is applied to the LA-HWMP to obtain the optimal path. The optimisation process results in the selection of paths with minimal delay. The genetic algorithm uses rank-based selection, a two-point crossover, and a random-reset mutation with a repair function to eliminate duplicate entries. The proposed method is compared with the HWMP, the LA-HWMP, and a state-of-the-art method that combines the ant colony algorithm and simulated annealing (ACA-SA), on AMI networks of different sizes. The results show that the path identified by the proposed method yields a shorter delay and higher throughput than paths identified using the other methods.
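As a rough illustration of the operators named above (not the paper's actual implementation), a two-point crossover on node paths, paired with a repair function that eliminates duplicate entries, might look like this:

```python
import random

def repair(path):
    """Remove duplicate nodes (keeping the first occurrence) so the
    route visits each node at most once."""
    seen, fixed = set(), []
    for node in path:
        if node not in seen:
            seen.add(node)
            fixed.append(node)
    return fixed

def two_point_crossover(parent_a, parent_b, rng=random):
    """Two-point crossover on node paths: splice a middle segment of
    parent_b into parent_a, then repair any duplicates introduced.
    Cut-point selection here is illustrative."""
    n = min(len(parent_a), len(parent_b))
    i, j = sorted(rng.sample(range(1, n), 2))
    child = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    return repair(child)
```

A repair step of this kind is needed because naive crossover on path representations routinely produces routes that revisit a node.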
       
  • A new optimal feature selection scheme for classification of power quality
           disturbances based on ant colony framework
    • Abstract: Publication date: Available online 24 October 2018. Source: Applied Soft Computing. Author(s): Utkarsh Singh, Shyam Narain Singh. The optimal feature selection problem in the classification of power quality disturbances concerns the elimination of redundant features. Its primary objective is to improve classification accuracy and computational time. The limited literature on this topic, and possible inefficiencies in existing works, make it imperative to explore new feature selection techniques. This paper thus presents an ant colony optimization model for the task. A new multiobjective feature selection scheme is proposed which minimizes the product of feature set size and classification error. A feature set is declared feasible if and only if it offers lower error and fewer features than other sets. The proposed scheme is compared against two other schemes from the literature in the same optimization framework. The S-transform and time-time transform are employed for detection and feature extraction; this combination has a proven ability to offer precise time-frequency details and a wide range of features for the optimal selection problem. Three classifiers, namely decision tree, k-nearest neighbour, and support vector machine, are used to check the generality of the proposed methodology. Disturbances simulated as per IEEE-1159 are used for initial testing, which confirms that the previous schemes may render accuracies even worse than those obtained without optimal feature selection. Conversely, the proposed scheme ensures remarkable accuracy even when validated with real disturbances from an experimental database. A performance comparison with some recent works is also shown.
       
  • Classification of mice hepatic granuloma microscopic images based on a
           deep convolutional neural network
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Yu Wang, Yating Chen, Ningning Yang, Longfei Zheng, Nilanjan Dey, Amira S. Ashour, V. Rajinikanth, João Manuel R.S. Tavares, Fuqian Shi. Hepatic granuloma develops in the early stage of liver cirrhosis and can seriously injure liver health. At present, the assessment of medical microscopic images is necessary for various diseases, and exploiting artificial intelligence technology to assist pathologists in pre-diagnosis is the trend of future medical development. In this article, we classify mice liver microscopic images into normal, granuloma-fibrosis1 and granuloma-fibrosis2 classes, using convolutional neural networks (CNNs) and two conventional machine learning methods: support vector machine (SVM) and random forest (RF). On account of the small dataset of 30 mice liver microscopic images, the proposed work includes a preprocessing stage to deal with the insufficient number of images, comprising the cropping of the original microscopic images into small patches and the random recombination of the cropped patches after labeling. In addition, recognizable texture features are extracted and selected using the gray-level co-occurrence matrix (GLCM), local binary patterns (LBP) and the Pearson correlation coefficient (PCC), respectively. The results establish a classification accuracy of 82.78% for the proposed CNN-based classifier over the three image types. In addition, the confusion matrix shows that the classification accuracies for the normal, granuloma-fibrosis1, and granuloma-fibrosis2 classes were 92.5%, 76.67%, and 79.17%, respectively. A comparative study of the proposed CNN-based classifier against the SVM and RF proved the superiority of the CNNs, showing promising performance for clinical cases.
       
  • Modified water cycle algorithm for optimal direction overcurrent relays
           coordination
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Ahmed Korashy, Salah Kamel, Abdel-Raheem Youssef, Francisco Jurado. The optimization model of Directional Over Current Relay (DOCR) coordination is a non-linear optimization problem with a large number of operating constraints. This paper proposes a modified version of the Water Cycle Algorithm (WCA), referred to as MWCA, to effectively solve the optimal coordination problem of DOCRs. The main goal is to minimize the sum of the operating times of all relays when they act as primary protective devices. The operating time of a relay depends on the time dial setting and the pickup current (plug) setting, which are considered the decision variables. In the proposed technique, the search space is reduced by gradually increasing the C-value of the traditional WCA during the iterative process, which affects the balance between the explorative and exploitative phases, in order to find the global minimum. The performance of the proposed algorithm is assessed using standard test systems: the 8-bus, 9-bus, 15-bus, and 30-bus systems. The results obtained by the proposed algorithm are compared with those obtained by other well-known optimization techniques. In addition, the proposed algorithm has been validated using the DIgSILENT PowerFactory benchmark. The results show the effectiveness and superiority of the proposed algorithm in solving the DOCR coordination problem, compared with the traditional WCA and other optimization techniques.
       
  • Driver identification based on hidden feature extraction by using adaptive
           nonnegativity-constrained autoencoder
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): Jie Chen, ZhongCheng Wu, Jun Zhang. In this paper, we propose a new driver identification method using deep learning. Existing driver identification methods suffer from sliding time windows that are too large and from relatively subjective feature extraction, which lead to low identification accuracy and long prediction times. We first propose using an unsupervised three-layer nonnegativity-constrained autoencoder to adaptively search for the optimal size of the sliding window, and then construct a deep nonnegativity-constrained autoencoder network to automatically extract hidden features of driving behavior and complete driver identification. The results on a public driving behavior dataset indicate that, relative to conventional sparse autoencoder, dropout autoencoder, random tree, and random forest algorithms, our method can effectively find the optimal size of the sliding time window, shortening the window from the traditional 60 s to 30 s, which better preserves the intrinsic information of the data while greatly reducing the data volume. Furthermore, our method extracts more distinctive hidden features that help the classifier map out the separating boundaries among the classes more easily. Finally, our method significantly shortens the prediction time and improves timeliness while improving driver identification performance and reducing model overfitting.
       
  • The value of big data for credit scoring: Enhancing financial inclusion
           using mobile phone data and social network analytics
    • Abstract: Publication date: January 2019. Source: Applied Soft Computing, Volume 74. Author(s): María Óskarsdóttir, Cristián Bravo, Carlos Sarraute, Jan Vanthienen, Bart Baesens. Credit scoring is without a doubt one of the oldest applications of analytics. In recent years, a multitude of sophisticated classification techniques have been developed to improve the statistical performance of credit scoring models. Instead of focusing on the techniques themselves, this paper leverages alternative data sources to enhance both statistical and economic model performance. The study demonstrates how including call networks, in the context of positive credit information, as a new Big Data source adds value in terms of profit, by applying a profit measure and profit-based feature selection. A unique combination of datasets, including call-detail records and customers' credit and debit account information, is used to create scorecards for credit card applicants. Call-detail records are used to build call networks, and advanced social network analytics techniques are applied to propagate influence from prior defaulters throughout the network to produce influence scores. The results show that combining call-detail records with traditional data in credit scoring models significantly increases their performance as measured by AUC. In terms of profit, the best model is the one built with only calling behavior features. In addition, the calling behavior features are the most predictive in other models, in terms of both statistical and economic performance. The results have implications for the ethical use of call-detail records, regulation, financial inclusion, and data sharing and privacy.
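The paper's exact propagation scheme is not given in the abstract; the sketch below only illustrates the general idea of spreading influence from prior defaulters through a call graph, using a personalized-PageRank-style iteration. All names and parameters are assumptions, and each node is assumed to have at least one outgoing call:

```python
def propagate_influence(adjacency, seeds, damping=0.85, iters=50):
    """Spread 'default influence' from known defaulters (seeds)
    through a directed call graph. Each iteration mixes a restart
    toward the seed set with influence received from callers."""
    nodes = list(adjacency)
    score = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    restart = dict(score)
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Influence arriving at n, split evenly over each caller's edges.
            incoming = sum(score[m] / len(adjacency[m])
                           for m in nodes if n in adjacency[m])
            new[n] = (1 - damping) * restart[n] + damping * incoming
        score = new
    return score
```

The resulting scores can then be used as features in a scorecard, with applicants close to many defaulters scoring higher.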
       
  • Guided Stochastic Gradient Descent Algorithm for inconsistent datasets
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Anuraganand Sharma. The Stochastic Gradient Descent (SGD) algorithm, despite its simplicity, is considered an effective and de facto standard optimization algorithm for machine learning classification models such as neural networks and logistic regression. However, each SGD descent step is based on a randomly selected data instance; in this paper, this is termed data inconsistency. The proposed variation of SGD, the Guided Stochastic Gradient Descent (GSGD) algorithm, tries to overcome this inconsistency in a given dataset through the greedy selection of consistent data instances for gradient descent. The empirical test results show the efficacy of the method. Moreover, GSGD has also been incorporated into, and tested with, other popular variations of SGD such as Adam, Adagrad and Momentum. The guided search of GSGD achieves better convergence and classification accuracy in a limited time budget than canonical SGD and its other variations. Additionally, it maintains the same efficiency when tested on medical benchmark datasets with logistic regression for classification.
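The abstract does not define the consistency criterion precisely, so the following is a toy illustration only: a "guided" step that, among a few sampled instances, greedily keeps the single-instance update that most reduces the overall loss (shown here for 1-D logistic regression; this surrogate criterion is hypothetical, not the paper's):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, data):
    """Mean logistic loss for 1-D inputs with weight w (no bias term)."""
    eps = 1e-12
    return -sum(y * math.log(sigmoid(w * x) + eps)
                + (1 - y) * math.log(1 - sigmoid(w * x) + eps)
                for x, y in data) / len(data)

def gsgd_step(w, data, lr=0.5, candidates=4, rng=random):
    """Guided step: among a few randomly drawn instances, apply the
    single-instance gradient update that most reduces the loss over
    the whole dataset; keep w unchanged if none improves it."""
    best_w, best_l = w, loss(w, data)
    for x, y in rng.sample(data, candidates):
        g = (sigmoid(w * x) - y) * x        # gradient of the logistic loss
        w_try = w - lr * g
        l_try = loss(w_try, data)
        if l_try < best_l:
            best_w, best_l = w_try, l_try
    return best_w
```

Canonical SGD would instead apply the first sampled instance's gradient unconditionally; the guided variant trades extra loss evaluations for steps that are consistent with the dataset as a whole.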
       
  • Time-series clustering based on linear fuzzy information granules
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Lingzi Duan, Fusheng Yu, Witold Pedrycz, Xiao Wang, Xiyang Yang. In this paper, time-series clustering is discussed. First, the ℓ1 trend filtering method is used to produce an optimal segmentation of a time series. Next, optimized fuzzy information granulation is completed for each segment to form a linear fuzzy information granule, which includes both average and trend information. Once the optimal segmentation and granulation have been completed, the original time series is transformed into a granular time series. To finalize time-series clustering, a distance measure for granular time series is established, and a linear fuzzy information granule-based dynamic time warping (LFIG_DTW) algorithm is developed for calculating the distance between two equal-length or unequal-length granular time series. The distance realized by the LFIG_DTW algorithm can detect not only increasing or decreasing trends, but also the periods and rates of change. After calculating the distances between all pairs of granular time series, an LFIG_DTW distance-based hierarchical clustering method is designed for time-series clustering. Experimental results involving several real datasets show the effectiveness of the proposed method.
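LFIG_DTW itself is not reproduced in the abstract, but it builds on classic dynamic time warping, which aligns two sequences of possibly different lengths. A minimal DTW sketch, where the paper's granule-to-granule distance would be plugged in as `dist`:

```python
def dtw(a, b, dist=lambda u, v: abs(u - v)):
    """Classic dynamic time warping between two (possibly
    unequal-length) sequences. The paper's LFIG_DTW would replace
    `dist` with a distance between linear fuzzy information granules."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```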
       
  • A decomposition-based multiobjective evolutionary algorithm with
           angle-based adaptive penalty
    • Abstract: Publication date: Available online 23 October 2018. Source: Applied Soft Computing. Author(s): Junfei Qiao, Hongbiao Zhou, Cuili Yang, Shengxiang Yang. A multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem (MOP) into a number of scalar optimization subproblems and optimizes them in a collaborative manner. In MOEA/D, decomposition mechanisms are used to push the population towards the Pareto optimal front (POF), while a set of uniformly distributed weight vectors is applied to maintain the diversity of the population. Penalty-based boundary intersection (PBI) is one of the most frequently used decomposition approaches. In PBI, the penalty factor plays a crucial role in balancing convergence and diversity. However, the traditional PBI approach adopts a fixed penalty value, which significantly degrades the performance of MOEA/D on some MOPs with complicated POFs. This paper proposes an angle-based adaptive penalty (AAP) scheme for MOEA/D, called MOEA/D-AAP, which dynamically adjusts the penalty value for each weight vector during the evolutionary process. Six newly designed benchmark MOPs and an MOP from the wastewater treatment process are used to test the effectiveness of the proposed MOEA/D-AAP. Comparison experiments demonstrate that the AAP scheme can significantly improve the performance of MOEA/D.
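For reference, the standard PBI scalarizing function that the AAP scheme adapts can be sketched as follows; the adaptive penalty rule itself is the paper's contribution and is not reproduced here:

```python
import math

def pbi(f, weight, ideal, theta):
    """Penalty-based boundary intersection value of objective vector f
    for a given weight vector: d1 measures convergence along the
    weight direction from the ideal point, d2 measures the distance
    to that direction (diversity), and theta is the penalty factor."""
    diff = [fi - zi for fi, zi in zip(f, ideal)]
    norm_w = math.sqrt(sum(w * w for w in weight))
    d1 = abs(sum(d * w for d, w in zip(diff, weight))) / norm_w
    d2 = math.sqrt(max(sum(d * d for d in diff) - d1 * d1, 0.0))
    return d1 + theta * d2
```

A larger `theta` penalizes solutions that stray from their weight vector, favoring diversity; a smaller one favors raw convergence, which is the balance the AAP scheme tunes per weight vector.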
       
  • A new Fuzzy methodology-based structured framework for RAM and risk
           analysis
    • Abstract: Publication date: Available online 23 October 2018. Source: Applied Soft Computing. Author(s): Dilbagh Panchal, Anupam K. Singh, Prasenjit Chatterjee, Edmundas Kazimieras Zavadskas, Mehdi Keshavarz Ghorabaee. The aim of this paper is to propose a new hybridized framework for analyzing the performance issues of a chemical process plant under uncertain, imprecise and vague information. In the proposed framework, the fuzzy Lambda-Tau (FLT) approach is used for computing the reliability, availability and maintainability (RAM) parameters of the considered system. Further, to enhance the RAM characteristics of the system, an improved fuzzy Failure Mode and Effect Analysis (FMEA) approach is adopted, in which fuzzy linguistic rating values in the form of triangular and trapezoidal fuzzy numbers are assigned by experts to each risk factor of the listed failure causes. The proposed framework is demonstrated with an industrial application in a chlorine production plant of a chemical process industry. The results show a decreasing trend in system availability, and deposition of solid NaCl, mechanical failure, corrosion due to wet chlorine, scanty lubrication, improper adsorption and valve malfunctioning are identified as the most critical failure causes for the considered system. A comparative performance analysis between the proposed framework, the fuzzy technique for order of preference by similarity to ideal solution (fuzzy TOPSIS) and fuzzy evaluation based on distance from average solution (fuzzy EDAS) is then carried out to show the competence of the proposed framework. The analytical results should be highly useful in formulating an optimal maintenance policy for such complex systems and may also be used for improving the performance of similar plants.
       
  • Multi-Objective Artificial Bee Colony for designing multiple genes
           encoding the same protein
    • Abstract: Publication date: Available online 17 October 2018. Source: Applied Soft Computing. Author(s): Belen Gonzalez-Sanchez, Miguel A. Vega-Rodríguez, Sergio Santander-Jiménez, José M. Granado-Criado. The improvement of protein expression levels represents one of the most important goals in synthetic biology. A promising and widely used strategy to accomplish it lies in integrating multiple genes that encode the subject protein into an organism's genome. This important task, however, is affected by several challenging issues. Firstly, the integration of highly similar sequences can induce homologous recombination, a negative effect that implies a reduction in the number of genes effectively integrated. This is why it is important to design multiple protein-coding sequences (also called CDSs) that are as different as possible, both between different CDSs and between different subsequences within the same CDS. Additionally, codon usage frequencies in these CDSs should be as highly adapted to the organism as possible. This task therefore involves different and conflicting objectives that must be optimized, making it suitable to be tackled as a multi-objective optimization problem. In this work, we design and implement the MOABC algorithm (Multi-Objective Artificial Bee Colony) to solve the problem of designing multiple CDSs that encode the same protein, considering three objectives to be optimized. The experimental evaluation performed herein suggests that MOABC obtains relevant results, showing statistically significant improvements over those found in the literature.
       
  • Picture fuzzy Dombi aggregation operators: Application to MADM process
    • Abstract: Publication date: Available online 16 October 2018. Source: Applied Soft Computing. Author(s): Chiranjibe Jana, Tapan Senapati, Madhumangal Pal, Ronald R. Yager. The t-norm and t-conorm operations developed by Dombi, generally known as Dombi operations, can express applications more flexibly through their general parameter. To the best of our knowledge, Dombi operations have not yet been applied to picture fuzzy sets. Inspired by the established theory of Dombi operations and applying picture fuzzy numbers (PFNs), in the present study we propose the picture fuzzy Dombi weighted average (PFDWA), picture fuzzy Dombi ordered weighted average (PFDOWA), picture fuzzy Dombi hybrid weighted average (PFDHWA), picture fuzzy Dombi weighted geometric (PFDWG), picture fuzzy Dombi ordered weighted geometric (PFDOWG) and picture fuzzy Dombi hybrid weighted geometric (PFDHWG) operators. The operational parameter of these operators can be tuned to suit the given problem. We then develop a model based on picture fuzzy Dombi aggregation operators to solve multiple attribute decision making (MADM) problems. Finally, a practical application to a decision over investment alternatives is reported to illustrate the utility of the proposed work.
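The underlying Dombi operations themselves are standard. For two membership degrees in [0, 1] they can be sketched as follows, with the general parameter written `lam` (λ in the literature); the picture-fuzzy operators in the paper apply these componentwise to PFNs:

```python
def dombi_tnorm(a, b, lam=1.0):
    """Dombi t-norm with generator parameter lam > 0; larger lam
    makes the conjunction stricter."""
    if a == 0 or b == 0:
        return 0.0
    q = ((1 - a) / a) ** lam + ((1 - b) / b) ** lam
    return 1.0 / (1.0 + q ** (1.0 / lam))

def dombi_tconorm(a, b, lam=1.0):
    """Dombi t-conorm, the dual of the t-norm under 1 - x negation."""
    if a == 1 or b == 1:
        return 1.0
    q = (a / (1 - a)) ** lam + (b / (1 - b)) ** lam
    return 1.0 / (1.0 + q ** (-1.0 / lam))
```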
       
  • On fuzzy approaches for enlarging skyline query results
    • Abstract: Publication date: Available online 15 October 2018. Source: Applied Soft Computing. Author(s): Djamal Belkasmi, Allel Hadjali, Hamid Azzoune. In the last decade, skyline queries have gained much attention and have proved valuable for multi-criteria decisions. Based on the concept of Pareto dominance, they return the non-dominated points, called the skyline points. In practice, the skyline may contain only a small number of points, which can be insufficient for the user's needs. In this paper, we discuss two fuzzy-set-based approaches to enriching a small skyline with particular points that could serve decision makers' needs. The basic idea consists in identifying the most interesting points among the non-skyline ones. On the one hand, we introduce a novel fuzzy dominance relationship which makes dominance between points of interest more demanding, so that more points are considered incomparable and thus become elements of the new relaxed skyline. On the other hand, we leverage an appropriate fuzzy closeness relation to retrieve non-skyline points that are fuzzily close to some skyline points. Furthermore, we develop efficient algorithms to compute the relaxed variants of the skyline. Extensive experiments are conducted to demonstrate the effectiveness of our approaches and analyze the performance of the proposed algorithms. A comparative study between the presented approaches is made as well.
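The crisp skyline that the paper relaxes can be sketched in a few lines, assuming all criteria are to be minimized; the paper's fuzzy dominance degree would replace the crisp test below:

```python
def dominates(p, q):
    """Crisp Pareto dominance (all criteria minimized): p dominates q
    if p is no worse on every criterion and strictly better on one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Making dominance harder to satisfy (as the fuzzy relation does) can only shrink the set of dominated points, which is precisely why the relaxed skyline is larger.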
       
  • MLBC: Multi-objective Load Balancing Clustering technique in Wireless
           Sensor Networks
    • Abstract: Publication date: Available online 15 October 2018. Source: Applied Soft Computing. Author(s): Sukhchandan, Sushma Jain. In Wireless Sensor Networks (WSNs) there are scenarios with conflicting objectives, which need to be modelled as multi-objective optimization formulations. The optimization problem changes with the nature of the application, the network scenario and the input/output parameters. Depending on the underlying application requirements, an appropriate multi-objective optimization technique is needed to manage these conflicting objectives simultaneously and to yield an overall optimal solution. In this paper, a novel Multi-objective Load Balancing Clustering (MLBC) technique is proposed, utilizing Multi-Objective Particle Swarm Optimization (MOPSO). Two objective functions are defined: energy efficiency and reliability. The energy efficiency is based on the average residual energy of the Cluster Heads (CHs), whereas the reliability is based on the communication cost of inter-cluster routing. The nodes transmit their information to the Cluster Head or Base Station in a single-hop or multi-hop manner. A healing function is utilized to avoid loops in the generated path. Load balancing is performed by shuffling the roles of next-hop node and Cluster Head in each iteration. The best compromise solution is selected through a fuzzy decision-based approach. The performance of the proposed and existing approaches is evaluated in terms of energy efficiency, network lifetime, packet delivery ratio, data accuracy and number of active nodes. Apart from these parameters, coverage and scalability are also considered to evaluate the quality of the solutions provided by the multi-objective optimization approach.
       
  • Scaled and oriented object tracking using ensemble of multilayer
           perceptrons
    • Abstract: Publication date: Available online 9 October 2018. Source: Applied Soft Computing. Author(s): Ajoy Mondal, Ashish Ghosh, Susmita Ghosh. Major challenges in moving object tracking are handling changes in scale and orientation, background clutter, and large variations in pose with occlusion. This article presents an algorithm to track a moving object in such complex environments. A discriminative model based on an ensemble of multilayer perceptrons (MLPs) is proposed to detect the object against its cluttered background. The orientation and enhanced scale of the detected object are estimated using binary moments. The problem of object tracking is posed as a constrained optimization with respect to the location, scale and orientation of the object. Two different heuristics, based on support value and confidence score, are proposed to reduce drift and to detect full occlusion. Three benchmark datasets are considered for the experiments, and the proposed algorithm attains state-of-the-art performance under various conditions.
       
  • Evolutionary multiobjective multiple description wavelet based image
           coding in the presence of mixed noise in images
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Huseyin Kusetogullari, Amir Yavariabdi. In this paper, a novel method for the generation of multiple description (MD) wavelet-based image coding is proposed using Multi-Objective Evolutionary Algorithms (MOEAs). The complexity of the multimedia transmission problem increases for MD coders when the input image is affected by noise. In this case, two different problems must be solved: designing the optimal side quantizers and estimating the optimal parameters of the denoising filter. Existing MD coding (MDC) generation methods can solve only the first problem, designing side quantizers from a given noise-free image, but they can fail to reduce noise on the descriptions when applied to a noisy image, which degrades the quality of multimedia transmission in networks. The proposed method overcomes these difficulties to provide effective multimedia transmission in lossy networks. To achieve this, the Dual-Tree Complex Wavelet Transform (DT-CWT) is first applied to the noisy image to obtain the subbands, or sets of coefficients, which are used as the search space in the optimization problem. After that, two different objective functions are simultaneously employed in the MOEA to find Pareto-optimal solutions with minimum costs by evolving the initial individuals through generations. Thus, optimal quantizers are created for MDC generation, and the obtained optimum parameters are used in the image filter to effectively remove mixed Gaussian impulse noise on the descriptions. The results demonstrate that the proposed method is robust to mixed Gaussian impulse noise and offers a significant improvement in optimal side quantizers for balanced MDC generation at different bitrates.
       
  • A new evolutionary algorithm with locally assisted heuristic for complex
           detection in protein interaction networks
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Amenah H. Abdulateef, Bara’a A. Attea, Ahmed N. Rashid, Mayyadah Al-Ani. The detection of protein complexes is an essential NP-hard problem in protein-protein interaction (PPI) networks. Modularity, community score, ratio cut, and internal density are some examples of state-of-the-art optimization models. The contribution of this paper is a heuristic approach that can serve as a common locally assisted optimization model to reinforce the reliability of all these complex detection models. The proposed heuristic hypothesizes a decomposition of protein pairs, according to their interactions, into two different types: a pair of proteins is classified as either an intra-delineation pair or an inter-delineation pair depending on their topological similarity. Two proteins with high topological similarity are favored to form an intra-delineation structure; otherwise, they form an inter-delineation pair. A better complex structure should therefore contain more intra-delineation pairs and fewer inter-delineation pairs within complexes, and more inter-delineation pairs and fewer intra-delineation pairs among separate complexes. The proposed heuristic operator is injected into the framework of single-objective and multi-objective evolutionary algorithms (EAs), while existing complex detection models work as EA optimization templates. In the experiments, we analyze the performance of the detection models when applied to publicly available yeast protein networks. The results give a clear argument for the positive impact of the proposed heuristic approach, which considerably improves the detection ability of existing optimization models and provides more competitive results than a recently developed heuristic approach.
       
  • Vector quantization codebook design based on Fish School Search algorithm
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): C.S. Fonseca, F.A.B.S. Ferreira, F. Madeiro. Vector Quantization (VQ) has been used in image coding systems since it allows high compression rates. Codebook design can be seen as a high-dimensional optimization problem and, in this scenario, swarm intelligence techniques have been used. This paper presents a new VQ codebook design algorithm based on swarm clustering. The method embeds the Fish School Search (FSS) algorithm in the Linde–Buzo–Gray (LBG) algorithm as a swarm clustering method, here called FSS-LBG. Also, a modification of the original FSS breeding operation is proposed in order to favor exploration and, therefore, achieve better results in terms of the PSNR of the reconstructed images. Simulation results show gains of up to 1.57 dB in PSNR compared to the LBG algorithm for the image Lena at 0.5625 bpp using a codebook of size 512.
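For context, one Lloyd-style iteration of the LBG codebook update can be sketched as below; FSS-LBG's contribution, per the abstract, is steering codeword placement with Fish School Search, which is not reproduced here:

```python
def lbg_update(vectors, codebook):
    """One LBG/Lloyd iteration: assign each training vector to its
    nearest codeword (squared Euclidean distance), then move each
    codeword to the centroid of its cell. Empty cells keep their
    codeword unchanged in this sketch."""
    cells = {i: [] for i in range(len(codebook))}
    for v in vectors:
        i = min(range(len(codebook)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(v, codebook[k])))
        cells[i].append(v)
    new_cb = []
    for i, cw in enumerate(codebook):
        if cells[i]:
            dim = len(cw)
            cw = tuple(sum(v[d] for v in cells[i]) / len(cells[i])
                       for d in range(dim))
        new_cb.append(cw)
    return new_cb
```

Iterating this update monotonically reduces quantization distortion but is sensitive to the initial codebook, which is the weakness swarm-based placement targets.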
       
  • Class-specific kernelized extreme learning machine for binary class
           imbalance learning
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Bhagat Singh Raghuwanshi, Sanyam Shukla. The class imbalance problem occurs when the training dataset contains significantly fewer samples of one class (the minority class) compared to another class (the majority class). The conventional extreme learning machine (ELM) gives equal importance to all samples, leading to results which favor the majority class. Numerous variants of ELM, such as weighted ELM (WELM), class-specific cost regulation ELM (CCR-ELM), and class-specific ELM (CS-ELM), have been proposed to diminish the performance degradation caused by the class imbalance problem. ELM with a Gaussian kernel outperforms ELM with sigmoid nodes. This work proposes a novel class-specific kernelized ELM (CSKELM), a variant of kernelized ELM, to address the class imbalance problem more effectively. CSKELM with a Gaussian kernel function avoids the non-optimal hidden node problem associated with CS-ELM and the other existing variants of ELM. This work is distinct from WELM because it does not require the assignment of weights to the training samples, and it has considerably lower computational cost than kernelized WELM. It employs class-specific regularization in the same way as CS-ELM, but differs from CS-ELM in that the proposed CSKELM uses the Gaussian kernel function to map the input data to the feature space. The proposed work also has lower computational overhead than kernelized CCR-ELM. The proposed method is assessed on benchmark real-world imbalanced datasets downloaded from the KEEL dataset repository. The experimental results indicate the superiority of the proposed work over the other classifiers for imbalanced classification problems.
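The description matches the usual kernelized-ELM closed form, beta = (K + D)^(-1) t, where K is the kernel matrix and D a regularization diagonal; "class-specific" then means D carries a different 1/C per class. The following is a sketch under that assumption (the minority label, parameters, and the tiny solver are illustrative, not the paper's code):

```python
import math

def gaussian_kernel(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (demo only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def train_cskelm(X, y, c_minority, c_majority, gamma=1.0):
    """Class-specific kernelized ELM sketch: solve (K + D) beta = y,
    where D is diagonal with 1/C per sample. Minority samples
    (label +1 here, an assumption) get a larger C, i.e. a smaller
    diagonal term, so their targets are fitted more closely."""
    n = len(X)
    K = [[gaussian_kernel(X[i], X[j], gamma) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        K[i][i] += (1.0 / c_minority) if y[i] == 1 else (1.0 / c_majority)
    return solve(K, [float(t) for t in y])

def predict(X_train, beta, x, gamma=1.0):
    """Decision value: sign(+) = minority, sign(-) = majority."""
    return sum(b * gaussian_kernel(xi, x, gamma)
               for b, xi in zip(beta, X_train))
```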
       
  • Data-driven MIMO model-free reference tracking control with nonlinear
           state-feedback and fractional order controllers
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Mircea-Bogdan Radac, Radu-Emil Precup. In this paper we suggest an extension of the Virtual Reference Feedback Tuning (VRFT) framework to nonlinear state-feedback and fractional order (FO) controllers. Theoretical analysis motivates the use of VRFT for tuning general nonlinear controllers to achieve model reference matching, because the more complex parameterization of the nonlinear state-feedback and FO controllers is expected to lead to improved control performance. Key factors needed for successful controller tuning are discussed: good exploration of the process dynamics, which depends on careful selection of the input excitation signal; the influence of the controller parameterization; and the selection of the reference model. VRFT is then applied to a multi-input multi-output (MIMO) nonlinear coupled vertical tank system as a case study, to tune MIMO proportional-integral (PI), fractional order proportional-integral (FO-PI) and neural network state-feedback controllers. The PI and FO-PI controllers are tuned in continuous time but implemented in discrete time to enable real-world application. The trade-off between controller complexity and control performance is revealed. For comparison purposes, an original combination of VRFT and batch fitted Q-learning is employed as a two-step model-free controller tuning procedure that yields dramatic performance improvement.
       
  • Considerations on the principle of rule induction by STRIM and its
           relationship to the conventional Rough Sets methods
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Yuichi Kato, Tetsuro Saeki, Shoutaro Mizuno. STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, and its effectiveness has been confirmed by simulation experiments. The method was developed independently of the conventional rough sets methods. This paper summarizes the basic notion of the conventional rule induction methods, newly formulates the idea of STRIM, considers the relationship between STRIM and the conventional methods, especially VPRS (Variable Precision Rough Set), and shows that STRIM develops the notion of VPRS into a statistical principle. In a simulation experiment, we also study the conditions under which STRIM induces true rules specified in advance. On the other hand, real-world datasets are often small and/or contain missing and contaminated values in the decision table for various reasons. In order to apply STRIM to real-world datasets, we examine the capacity of STRIM in such circumstances by a simulation experiment, after studying the question of what dataset size STRIM requires. Such studies and examinations are very important for confirming that STRIM is properly applied to real-world datasets and that its results are reasonable.
       
  • Application of eXtreme gradient boosting trees in the construction of
           credit risk assessment models for financial institutions
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Yung-Chia Chang, Kuei-Hu Chang, Guan-Jhih Wu. The majority of recent studies on credit risk assessment models for financial institutions focus on handling imbalanced data or on enhancing classification accuracy with multistage modeling. Whilst multistage modeling and data pre-processing can boost accuracy somewhat, the heterogeneous nature of the data may affect the classification accuracy of the classifiers. This paper uses the eXtreme gradient boosting tree (XGBoost) classifier to construct a credit risk assessment model for financial institutions, with cluster-based under-sampling deployed to process the imbalanced data. The area under the receiver operating characteristic curve and classification accuracy serve as assessment indicators in a comparison with frequently used single-stage classifiers such as logistic regression, self-organizing algorithms and support vector machines. The results indicate that the XGBoost classifier achieves better results than the other three and can serve as a superior tool for developing credit risk models for financial institutions.
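Cluster-based under-sampling, as mentioned in the abstract, reduces the majority class while preserving its structure. A minimal sketch under assumptions (the paper's exact variant is not specified here): cluster the majority class with plain k-means, then keep an equal share of the samples closest to each cluster center.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's algorithm; enough for a sketch
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return labels, centers

def cluster_undersample(X_maj, n_keep, k=5, seed=0):
    """Keep at most n_keep majority samples, spread over k clusters,
    preferring samples closest to their cluster center."""
    labels, centers = kmeans(X_maj, k, seed=seed)
    dist = np.linalg.norm(X_maj - centers[labels], axis=1)
    per = max(1, n_keep // k)
    keep = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        keep.extend(idx[np.argsort(dist[idx])][:per])
    return np.array(keep[:n_keep])
```

The returned indices select a reduced majority set that still spans every cluster, which is the property that random under-sampling does not guarantee.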
       
  • On comparing some algorithms for finding the optimal bandwidth in
           Geographically Weighted Regression
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Alan Ricardo da Silva, Felipe Franco Mendes. Geographically Weighted Regression (GWR) models are sensitive to the choice of the bandwidth, and this choice is traditionally made through the golden section search algorithm. This algorithm is applied to a function known as cross-validation (CV), which quantifies the efficiency of the model, in search of the optimal parameter that yields the best model. In this paper, the behavior of the CV function was studied, and it was verified that when it is not strictly convex, the golden section search algorithm converges to local minima instead of the global one. Three algorithms were used to find the optimal bandwidth: the lightning search algorithm, the harmony search algorithm and an adaptation of the golden section search algorithm. Comparisons were made between them to check the suitability of each one for GWR models. It was found that the golden section search algorithm is not the most adequate in this situation because, in more than one simulation, it returned a value too far from the optimal bandwidth. It was also verified that models with a bandwidth far from the optimal value showed differences in the significance of the parameter estimates compared to models with the optimal bandwidth.
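The failure mode the abstract identifies follows directly from how golden section search works: it assumes the objective is unimodal on the bracket, so on a non-convex CV curve it can lock onto a local minimum. A minimal sketch of the standard algorithm (not tied to any GWR implementation):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize f on [a, b]. Only reliable when f is unimodal there;
    on a non-convex CV curve it may return a local minimum, which is
    exactly the bandwidth-selection pitfall discussed in the abstract."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the golden ratio while reusing one interior evaluation, which is why the method is cheap; the cost is the unimodality assumption.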
       
  • Optimizing long short-term memory recurrent neural networks using ant
           colony optimization to predict turbine engine vibration
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): AbdElRahman ElSaid, Fatima El Jamiy, James Higgins, Brandon Wild, Travis Desell. This article expands on previous work developing a recurrent neural network (RNN) capable of predicting aircraft engine vibration using long short-term memory (LSTM) neurons. LSTM RNNs can provide a more generalizable and robust method of prediction than analytical calculations of engine vibration, as the analytical calculations must be solved iteratively based on specific empirical engine parameters, making that approach ungeneralizable across multiple engines. In initial work, multiple LSTM RNN architectures were proposed, evaluated and compared. This research improves the performance of the most effective LSTM network design from the previous work by using a promising neuroevolution method based on ant colony optimization (ACO) to evolve and enhance the LSTM cell structure of the network. A parallelized version of the ACO neuroevolution algorithm was developed, and the evolved LSTM RNNs were compared to the previously used fixed topology. The evolved networks were trained on a large database of flight data records, obtained from an airline, containing flights that suffered from excessive vibration. Results were obtained using MPI (Message Passing Interface) on a high performance computing (HPC) cluster, evolving 1000 different LSTM cell structures using 208 cores over 21 days. The new evolved LSTM cells improved accuracy by 1.34 percentage points, reducing the mean prediction error from 5.61% to 4.27% when predicting excessive engine vibration 10 s in the future, while at the same time dramatically reducing the number of weights from 21,170 to 13,150. The optimized LSTM also performed significantly better than traditional Nonlinear Output Error (NOE), Nonlinear AutoRegression with eXogenous inputs (NARX) and Nonlinear Box-Jenkins (NBJ) models, which only reached error rates of 15.73%, 12.06% and 15.05%, respectively. Furthermore, LSTM regularization was used to validate the ACO: the ACO LSTM outperformed the regularized LSTM by 3.35 percentage points. The NOE, NARX and NBJ models were also regularized for cross-validation, yielding mean prediction errors of 8.70%, 9.40% and 9.43% respectively, which further supports the ant colony optimized LSTM RNN.
       
  • Estimating unit cost using agent-based fuzzy collaborative intelligence
           approach with entropy-consensus
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Toly Chen. In this study, an agent-based fuzzy collaborative intelligence (FCI) approach with entropy as a measure of consensus is proposed for estimating the unit cost of a product, a critical task for manufacturers. The unit cost of a product declines according to a learning process that involves considerable uncertainty, rendering this task difficult. Although a few FCI methods have been proposed for estimating the unit cost of a product under uncertainty, they are inefficient or based on an insufficient consensus. To resolve these problems and enhance efficiency, an entropy-consensus agent-based FCI approach is proposed. In the proposed method, each agent autonomously applies one of several mathematical programming methods to model a fuzzy unit cost learning process, which is then used to estimate the unit cost. The fuzzy unit cost estimates of the agents are subsequently aggregated through fuzzy intersection. If the entropy of the aggregation result is higher than a threshold, the agents have an insufficient consensus and must modify their settings and re-estimate the unit cost. After a consensus has been reached, a back propagation network defuzzifies the aggregation result to derive a crisp value. The proposed methodology was applied to a dynamic random access memory case. The experimental results indicated that using autonomous agents accelerated collaboration and increased efficiency, and that deriving a representative value only after reaching a consensus was conducive to estimation performance.
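The aggregation-then-entropy step can be sketched generically. This is an assumption-laden illustration, not the paper's formulation: the agents' fuzzy estimates are taken as membership values on a common cost grid, fuzzy intersection is the pointwise minimum, and Shannon entropy of the normalized aggregate scores how diffuse the agreement is.

```python
import numpy as np

def consensus_entropy(memberships):
    """memberships: list of arrays, one per agent, each giving that
    agent's fuzzy membership values on a shared grid of unit costs.
    Aggregate by fuzzy intersection (pointwise min), then score the
    result with Shannon entropy: a flat, spread-out aggregate scores
    high (diffuse agreement), a sharp peak scores low."""
    agg = np.minimum.reduce([np.asarray(m, dtype=float) for m in memberships])
    total = agg.sum()
    if total == 0:
        return float("inf")  # empty intersection: no overlap at all
    p = agg / total
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

In the abstract's loop, a value above a chosen threshold would send the agents back to revise their settings and re-estimate.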
       
  • Roller bearing fault diagnosis using stacked denoising autoencoder in deep
           learning and Gath–Geva clustering algorithm without principal component
           analysis and data label
    • Abstract: Publication date: December 2018. Source: Applied Soft Computing, Volume 73. Author(s): Fan Xu, Wai tai Peter Tse, Yiu Lun Tse. Most deep learning models, such as the stacked autoencoder (SAE) and the stacked denoising autoencoder (SDAE), are used for fault diagnosis with labeled data: they extract useful features through several hidden layers, and a classifier then completes the fault diagnosis. However, such classification methods are only suitable for tagged datasets, while in practical engineering many datasets are untagged. Clustering methods can classify data without labels. Therefore, a method based on the SDAE and the Gath-Geva (GG) clustering algorithm is proposed for roller bearing fault diagnosis without data labels. First, the SDAE is used to extract useful features and reduce the vibration signal to two or three dimensions directly at the final hidden layer, without principal component analysis (PCA). Then GG is deployed to identify the different faults. To demonstrate that the feature extraction performance of the SDAE is better than that of the SAE and of EEMD with fuzzy entropy, PCA is used to reduce the dimension of the eigenvectors obtained from the earlier hidden layers (all except the final one). Compared with the SAE and ensemble empirical mode decomposition (EEMD)-fuzzy entropy (FE) models, the results show that as the number of hidden layers increases, the fault samples under different conditions are separated better by the SDAE than by those feature extraction models. In addition, three evaluation indicators, PC, CE and classification accuracy, are used to assess the performance of the presented method. Finally, the results show that the clustering effect and classification accuracy of the presented method are superior to those of the other combination models, including SAE-fuzzy C-means (FCM)/Gustafson-Kessel (GK)/GG and EEMD-FE-PCA-FCM/GK/GG.
       
  • Modified firefly algorithm for area estimation and tracking of fast
           expanding oil spills
    • Abstract: Publication date: Available online 29 September 2018. Source: Applied Soft Computing. Author(s): Abhijit Banerjee, Dipendranath Ghosh, Suvrojit Das. Oil spills are one of the major environmental hazards of the modern era. A spill's origin and trajectory need to be understood, analyzed and tracked so that it can be confined to a limited spatial domain with the least contamination. In this study, we propose a modified firefly algorithm (PropFA) closely based on the Firefly Algorithm (Yang, 2008). Unlike the Firefly Algorithm, PropFA requires no parameter tuning, as all parameters are calculated dynamically from present, past and expected objective function values, thus maintaining a proper balance between exploration and exploitation. The proposed algorithm is evaluated on 18 classical benchmark functions, 14 benchmark functions of CEC-2005 (Real-Parameter Optimization) and 28 benchmark functions of CEC-2013 (Special Session on Real-Parameter Optimization). PropFA is compared to modern evolutionary and metaheuristic algorithms such as PSO, CMA-ES, RC-MEMETIC and EDA. The numerical results show that PropFA enhances the performance of the Firefly Algorithm, successfully avoids local minima across dimensions and converges fast. PropFA is then used to model a system for distributed control of swarm UAVs (unmanned aerial vehicles) to efficiently search for, confine and track the shape and area of irregular dynamic oil spills for fast containment and trajectory estimation. The proposed method is also used to predict boom placement locations around the spill for its efficient confinement. Experimental results revealed that the proposed method, applied to a UAV swarm of size 30, was able to search for, fully confine and track an expanding oil spill as large as 16 km² for 9 min.
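For context on what PropFA modifies, the baseline move of the standard Firefly Algorithm (Yang, 2008) can be sketched as below: each firefly is attracted toward every brighter one with strength decaying in distance, plus a random walk. This is the textbook form with fixed parameters; per the abstract, PropFA's contribution is computing `beta0`/`gamma`/`alpha` dynamically instead.

```python
import numpy as np

def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One iteration of the standard firefly move (minimization):
    firefly i drifts toward each brighter firefly j, with attraction
    beta0 * exp(-gamma * r^2), plus an alpha-scaled random walk."""
    rng = rng or np.random.default_rng(0)
    new = pop.copy()
    for i in range(len(pop)):
        for j in range(len(pop)):
            if fitness[j] < fitness[i]:  # j is brighter (lower cost)
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                new[i] += beta * (pop[j] - new[i]) \
                          + alpha * (rng.random(pop.shape[1]) - 0.5)
    return new
```

With `alpha = 0` the best firefly never moves, so the best cost in the population cannot get worse in one step; the random walk trades that guarantee for exploration.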
       
  • Improving performance of classification on incomplete data using feature
           selection and clustering
    • Abstract: Publication date: Available online 29 September 2018. Source: Applied Soft Computing. Author(s): Cao Truong Tran, Mengjie Zhang, Peter Andreae, Bing Xue, Lam Thu Bui. Missing values are an unavoidable issue in many real-world datasets. One of the most popular approaches to classification with incomplete data is to use imputation to replace missing values with plausible values. However, powerful imputation methods are too computationally intensive when applying a classifier to a new unknown instance. This paper proposes new approaches to integrating imputation, clustering and feature selection for classification with incomplete data in order to improve efficiency without loss of accuracy. Clustering is used to reduce the number of instances used by the imputation. Feature selection is used to remove redundant and irrelevant features of the training data, which greatly reduces the cost of imputation. The paper also investigates the ability of Differential Evolution (DE) to search feature subsets with incomplete data. Results show that the integration of imputation, clustering and feature selection not only improves classification accuracy, but also dramatically reduces the computation time required to estimate missing values when classifying new instances.
       
  • Hybrid soft computing approach based on clustering, rule mining, and
           decision tree analysis for customer segmentation problem: Real case of
           customer-centric industries
    • Abstract: Publication date: Available online 28 September 2018. Source: Applied Soft Computing. Author(s): Kaveh Khalili-Damghani, Farshid Abdi, Shaghayegh Abolmakarem. This paper proposes a hybrid soft computing approach based on clustering, rule extraction and decision tree methodology to predict the segment of new customers in customer-centric companies. In the first module, the K-means algorithm is applied to cluster the company's past customers on the basis of their purchase behavior. In the second module, a hybrid feature selection method based on filtering and a multi-attribute decision making method is proposed. Finally, on the basis of the customers' characteristics and using decision tree analysis, IF-THEN rules are mined. The proposed approach is applied to two case studies in the insurance and telecommunication fields in order to predict potentially profitable leads and to outline the customer features most influential in this prediction. The results validate the efficacy and applicability of the proposed approach for handling real-life cases.
       
  • Combining heterogeneous classifiers via granular prototypes
    • Abstract: Publication date: Available online 28 September 2018. Source: Applied Soft Computing. Author(s): Tien Thanh Nguyen, Mai Phuong Nguyen, Xuan Cuong Pham, Alan Wee-Chung Liew, Witold Pedrycz. In this study, a novel framework to combine multiple classifiers in an ensemble system is introduced. Here we exploit the concept of information granule to construct granular prototypes for each class on the outputs of an ensemble of base classifiers. In the proposed method, uncertainty in the outputs of the base classifiers on training observations is captured by an interval-based representation. To predict the class label for a new observation, we first determine the distances between the output of the base classifiers for this observation and the class prototypes; the predicted class label is then the label associated with the shortest distance. In the experimental study, we combine several learning algorithms to build the ensemble system and conduct experiments on the UCI, colon cancer, and selected CLEF2009 datasets. The experimental results demonstrate that the proposed framework outperforms several benchmarked algorithms, including two trainable combining methods (Decision Template and Two Stages Ensemble System), AdaBoost, Random Forest, L2-loss Linear Support Vector Machine, and Decision Tree.
       
  • An Entropy-based PSO for DAR task scheduling problem
    • Abstract: Publication date: Available online 28 September 2018. Source: Applied Soft Computing. Author(s): Haowei Zhang, Junwei Xie, Jiaang Ge, Wenlong Lu, Binfeng Zong. This paper addresses the task scheduling problem in digital array radar (DAR), which determines the optimal execution order of all tasks subject to precedence and resource constraints, with the aim of achieving good performance in multiple respects. To the best of our knowledge, the existing scheduling algorithms neglect the internal structure of tasks, are not posed as optimization models, and rely only on heuristic or meta-heuristic methods, so they cannot fully exploit the DAR's capability of handling various tasks. Therefore, for this NP-hard problem, an integer programming optimization model and a hybrid particle swarm optimization (PSO) algorithm are proposed. In the optimization model, a full radar task structure is established, and a comprehensive objective function is formed to guarantee performance in multiple respects. In the hybrid PSO, a modified PSO explores good scheduling schemes, and a heuristic task interleaving algorithm embedded in the PSO framework performs efficient task schedulability analysis. Moreover, chaotic sequences are adopted to improve the quality of the initial solution, and Shannon entropy is introduced to indicate the diversity of the population and adaptively tune the parameters. Simulation results show that the proposed algorithm outperforms three state-of-the-art scheduling algorithms while maintaining a reasonable runtime.
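An entropy-based diversity indicator of the kind the abstract mentions can be sketched as follows. This is a generic illustration, not the paper's exact measure: the Shannon entropy of a per-dimension position histogram, averaged over dimensions, so a spread-out swarm scores high and a collapsed one scores zero.

```python
import numpy as np

def swarm_entropy(positions, bins=10):
    """Shannon entropy of particle positions (per-dimension histogram,
    averaged over dimensions) as a diversity indicator. A hybrid PSO
    can adapt its parameters when this value drops (loss of diversity)."""
    H = []
    for d in range(positions.shape[1]):
        counts, _ = np.histogram(positions[:, d], bins=bins)
        p = counts[counts > 0] / counts.sum()
        H.append(-(p * np.log(p)).sum())
    return float(np.mean(H))
```

A scheduler's PSO loop could, for example, increase its exploration coefficients whenever this entropy falls below a threshold.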
       
  • A SVR-ANN combined model based on ensemble EMD for rainfall prediction
    • Abstract: Publication date: Available online 27 September 2018. Source: Applied Soft Computing. Author(s): Yu Xiang, Ling Gou, Lihua He, Shoulu Xia, Wenyong Wang. Accurate and timely rainfall prediction is very important in hydrological modeling, and various prediction methods have been proposed in recent years. In this work, information on the short-to-long time variation inside original rainfall time series is explored using Ensemble Empirical Mode Decomposition (EEMD) based analysis of three rainfall datasets collected by meteorological stations located in Kunming, Lincang and Mengzi, Yunnan Province, China. Considering both prediction accuracy and time efficiency, a novel combined model based on the information extracted with EEMD is then proposed. This model adopts different supervised learning methods for different components of the input data, employing Support Vector Regression (SVR) for short-period component prediction and an Artificial Neural Network (ANN) for long-period component prediction. Our experiments show better performance than traditional methods, offering a new direction in the rainfall prediction area.
       
  • PSOSCALF: A new hybrid PSO based on sine cosine algorithm and Levy flight
           for solving optimization problems
    • Abstract: Publication date: Available online 27 September 2018. Source: Applied Soft Computing. Author(s): Saeed Nezamivand Chegini, Ahmad Bagheri, Farid Najafi. The development of meta-heuristic algorithms for solving optimization problems and constrained engineering problems has been a topic of interest to researchers in recent years. The particle swarm optimization (PSO) algorithm is a social search-based, swarm intelligence algorithm distinguished by its high speed, small number of parameters and easy implementation. However, PSO has disadvantages such as finding a local minimum instead of the global minimum and weak global search capability. In this article, to address these deficiencies, the PSO algorithm is combined with the position updating equations of the sine cosine algorithm (SCA) and the Levy flight approach, yielding a new hybrid method called PSOSCALF. In the SCA, the mathematical formulation of the solution update is based on the behavior of the sine and cosine functions, which guarantee the exploitation and exploration capabilities. Levy flight is a random walk that produces search steps from the Levy distribution; its occasional large jumps make searches of the space more effective. Thus, by combining the SCA and Levy flight, the PSOSCALF algorithm enhances the exploration capability of the original PSO and prevents trapping in local minima. The performance and accuracy of PSOSCALF were examined on 23 unimodal and multimodal benchmark functions and 8 constrained real engineering problems. The optimization results on the test functions show that PSOSCALF is more successful than the PSO family and other algorithms in determining the global minimum of these functions. The proposed PSOSCALF algorithm is also successfully applied to the real constrained engineering problems and provides better solutions than other methods.
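The Levy flight ingredient described above is commonly generated with Mantegna's algorithm, sketched below. This is the standard textbook construction, not necessarily the paper's exact implementation; `beta` is the Levy index, commonly set to 1.5.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-distributed step via Mantegna's algorithm:
    u ~ N(0, sigma_u^2), v ~ N(0, 1), step = u / |v|^(1/beta).
    The heavy tail yields occasional large jumps, the property that
    improves global exploration in hybrids like PSOSCALF."""
    rng = rng or np.random.default_rng(0)
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```

In a position update, the step is typically scaled and added to the particle's position, e.g. `x += 0.01 * levy_step(dim) * (x - gbest)`.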
       
  • A convolutional neural network with feature fusion for real-time hand
           posture recognition
    • Abstract: Publication date: Available online 26 September 2018. Source: Applied Soft Computing. Author(s): Sérgio F. Chevtchenko, Rafaella F. Vale, Valmir Macario, Filipe R. Cordeiro. Gesture based human-computer interaction is both intuitive and versatile, with diverse applications such as in smart houses, operating theaters and vehicle infotainment systems. This paper presents a novel architecture, combining a convolutional neural network (CNN) and traditional feature extractors, capable of accurate and real-time hand posture recognition. The proposed architecture is evaluated on three distinct benchmark datasets and compared with state-of-the-art convolutional neural networks. Extensive experimentation is conducted using binary, grayscale and depth data, as well as two different validation techniques. The proposed feature fusion-based convolutional neural network (FFCNN) is shown to perform better across combinations of validation techniques and image representations. The recognition rate of FFCNN on binary images is equivalent to that on grayscale and depth images when the aspect ratio of gestures is preserved. A real-time recognition system is presented with a demonstration video.
       
  • A hybrid algorithm of particle swarm optimization, metropolis criterion
           and RTS smoother for path planning of UAVs
    • Abstract: Publication date: Available online 25 September 2018. Source: Applied Soft Computing. Author(s): Xiande Wu, Wenbin Bai, Yaen Xie, Xinzhu Sun, Chengchen Deng, Hongtao Cui. The particle swarm optimization (PSO) algorithm is a simple approach that is prone to premature convergence and stagnation, and loss of efficiency and sub-optimal solutions occur frequently when solving the path planning problem with PSO. Therefore, a method is proposed that optimizes the parameters affecting the performance of the PSO algorithm by using a Rauch-Tung-Striebel (RTS) smoother. Moreover, the Metropolis criterion is applied as the acceptance policy, which prevents the PSO algorithm from falling into local minima. The RTS smoother is applied to eliminate the irregular error of the PSO updated position and to smooth the produced path. Experimental results show that the proposed method, based on the fusion of PSO, the Metropolis criterion and the RTS smoother, performs better than existing methods in terms of solution quality and robustness in the path planning problem for UAVs.
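The Metropolis criterion used as the acceptance policy above is a standard rule from simulated annealing, sketched here generically (the paper's temperature schedule is not specified, so `temperature` is an assumed parameter):

```python
import math
import random

def metropolis_accept(curr_cost, cand_cost, temperature, rng=None):
    """Metropolis criterion: always accept an improving candidate;
    accept a worse one with probability exp(-delta / T). Occasionally
    accepting worse positions is what lets a swarm escape local minima."""
    rng = rng or random.Random(0)
    if cand_cost <= curr_cost:
        return True
    return rng.random() < math.exp(-(cand_cost - curr_cost) / temperature)
```

At high temperature almost any candidate is accepted (exploration); as the temperature drops, the rule approaches greedy acceptance (exploitation).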
       
  • A novel multiple-criteria decision-making-based FMEA model for risk
           assessment
    • Abstract: Publication date: Available online 21 September 2018. Source: Applied Soft Computing. Author(s): Huai-Wei Lo, James J.H. Liou. Failure mode and effect analysis (FMEA) is a forward-looking risk-management technique used in various industries to promote the reliability and safety of products, processes, structures, systems and services. However, FMEA has many defects in practice. Therefore, this paper proposes a new model that combines multiple-criteria decision-making with grey theory for FMEA. This approach has several advantages: it adds the expected cost to the original risk priority number (RPN) to reflect actual resource limitations; it assigns different weights to severity, occurrence, detectability and cost, derived with the best-worst method, in the RPN calculation; and it uses grey interval linguistic variables to manage information uncertainty. Furthermore, this study applies probability-based grey relational analysis to calculate the RPN, which preserves the information of the prioritized failure modes through interval analysis. To demonstrate the usefulness and effectiveness of the proposed model, real data from an international electronics company were used. The proposed model can provide an alternative risk prioritization for product development.
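For readers unfamiliar with FMEA scoring: the conventional RPN is simply severity × occurrence × detectability. The sketch below shows that baseline and one common way to generalize it with a cost factor and per-factor weights, in the spirit of the abstract; the weighted-geometric-mean form and the default weights are illustrative assumptions, not the paper's formula.

```python
def weighted_rpn(severity, occurrence, detectability, expected_cost,
                 weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted-geometric-mean generalization of the FMEA RPN (sketch).
    With weights (1, 1, 1, 0) this reduces to the classic S * O * D."""
    factors = (severity, occurrence, detectability, expected_cost)
    rpn = 1.0
    for f, w in zip(factors, weights):
        rpn *= f ** w
    return rpn
```

Failure modes are then ranked by this score, with higher values prioritized for corrective action.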
       
  • Sequential spectral clustering of hyperspectral remote sensing image over
           bipartite graph
    • Abstract: Publication date: Available online 20 September 2018. Source: Applied Soft Computing. Author(s): Aidin Hassanzadeh, Arto Kaarna, Tuomo Kauranne. Unsupervised classification is a crucial step in remote sensing hyperspectral image analysis, where producing labeled data is a laborious task. Spectral clustering is an appealing graph-partitioning technique with outstanding performance on data with non-linear dependencies; however, it is restricted to small-scale data and has not been effectively applied to hyperspectral image analysis. In this paper, the unsupervised classification of hyperspectral images is addressed through a sequential spectral clustering that can be extended to large-scale hyperspectral images. To this end, this paper utilizes a bipartite graph representation along with a sequential singular value decomposition and mini-batch K-means for unsupervised classification of hyperspectral imagery. We evaluate the proposed algorithm on several benchmark hyperspectral datasets, including Botswana, Salinas, Indian Pines, Pavia Center Scene and Pavia University Scene. The experimental results show significant improvements made by the proposed algorithm compared to state-of-the-art clustering algorithms.
       
  • An improved multi-objective optimization approach for performance-based
           design of structures using nonlinear time–history analyses
    • Abstract: Publication date: Available online 20 September 2018. Source: Applied Soft Computing. Author(s): Vahid Mokarram, Mohammad Reza Banan. Performance-based design (PBD) of buildings can be properly addressed in a multi-objective optimization framework. However, the computational costs of such an approach are very high, especially if nonlinear time-history analysis (NTHA) is used as the evaluation tool. In this paper, significant reductions in the computational cost of solving structural multi-objective optimization problems are achieved by proposing a new metaheuristic surrogate model called Surrogate FC-MOPSO. In this method, an extension of the FC-MOPSO algorithm, NTHA and pushover analysis (PA) are employed simultaneously for evaluating system responses. PAs serve as an approximation tool in the surrogate model, while the responses corresponding to feasible solutions are always evaluated with NTHAs. The final Pareto optimal solutions, which yield trade-offs between initial and life cycle costs (LCCs), are thus all evaluated based on NTHAs. It is shown that the proposed method substantially reduces the runtime of the considered problems. It is also demonstrated that adopting PAs as the only evaluation tool in optimal performance-based design of structures can yield unreliable solutions.
       
  • Application of a density based clustering technique on biomedical datasets
    • Abstract: Publication date: Available online 20 September 2018. Source: Applied Soft Computing. Author(s): Md Anisur Rahman, Md Zahidul Islam. Detecting the number of clusters in a biomedical dataset is very important for generating high quality clusters from it. In this paper, we evaluate the performance of a density based K-means clustering technique called DenClust on biomedical datasets. DenClust produces the number of clusters and high quality initial seeds from a dataset through a density based seed selection approach, without requiring user input on the number of clusters or the radius of the clusters. High quality initial seeds for K-means result in high quality clusters. The performance of DenClust is compared with six existing clustering techniques, namely CRUDAW-F, CRUDAW-H, AGCUK, GAGR, K-means and K-means++, on twenty biomedical datasets in terms of two external cluster evaluation criteria, entropy and purity, and one internal criterion, the sum of squared errors (SSE). We also perform a non-parametric statistical sign test on the cluster evaluation results of the techniques. Both the cluster evaluation results and the sign test results indicate the superiority of DenClust over the existing techniques on the biomedical datasets. The complexity of DenClust is O(n²), but its overall execution time on the datasets is less than that of AGCUK and GAGR, which have O(n) complexity.
       
  • A novel artificial bee colony algorithm for the workforce scheduling and
           balancing problem in sub-assembly lines with limited buffers
    • Abstract: Publication date: Available online 19 September 2018. Source: Applied Soft Computing. Author(s): Alkin Yurtkuran, Betul Yagmahan, Erdal Emel In this study, a workforce scheduling and balancing problem is solved in unpaced sub-assembly lines with buffers feeding the paced body assembly line of a car manufacturer. The goal is to determine the minimum workforce required to process split lots at sub-assembly stations to feed the paced line over a periodic time window. Limited by a given buffer capacity at each station but with flexible start times for each split lot, efficient workforce scheduling can prevent shortages in downstream stations. Therefore, a stock-continuity equation is proposed that yields the size of those split lots. Next, a single-objective Mixed Integer Programming (MIP) model is formulated for the problem as a combination of two implicitly weighted goals: minimising the workforce and the unbalanced workloads. The problem is a variant of the workforce scheduling and routing problem with time windows and negligible walking distances. Due to the NP-hardness of the problem, we propose an improved Artificial Bee Colony (ABC) algorithm, named discrete ABC with solution acceptance rule and multi-search (SAMSABC). The proposed algorithm is compared with different variants of ABC and other well-known metaheuristic algorithms such as Particle Swarm Optimisation and Differential Evolution on generated test cases. The computational results demonstrate the superiority of the proposed ABC algorithm and reveal that SAMSABC can achieve accurate results within short computational times.
       
  • A fuzzy goal programming model in portfolio selection under
           competitive-cum-compensatory decision strategies
    • Abstract: Publication date: Available online 15 September 2018. Source: Applied Soft Computing. Author(s): Mahuya De, B.K. Mangaraj, K.B. Das Many portfolio selection problems involve multiple conflicting and imprecise criteria in the decision environment. Multi-criteria decision-making approaches using fuzzy logic can tackle such problems when the imprecision is due to fuzziness. There are several fuzzy goal programming models for portfolio selection in the literature, using either the “Min” or the “additive” aggregation operator. While aggregation of decision criteria by the “Min” operator corresponds to a “fully competitive decision strategy”, the “additive” operator represents a “fully compensatory decision strategy”. In this paper, we propose a fuzzy goal programming approach using Werner’s “fuzzy and” hybrid operator, a combination of “Min” and the “arithmetic average”, to generate the efficient frontier. We obtain efficient solutions for different values of γ∈[0,1], the parameter of compensation for the criteria in the decision environment. We consider three criteria, viz. return, risk and liquidity. While we use semi-absolute deviation for measuring risk, we utilize piece-wise linear functions to describe the fuzzy return and fuzzy liquidity criteria. We take stocks from three market-capitalization-based indices of the NSE in India, viz. NIFTY 50, NIFTY Smallcap 100 and NIFTY 500. The model demonstrates efficient portfolios for different strategies in a competitive-cum-compensatory decision environment.
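The “fuzzy and” hybrid operator described above is simple to state. A minimal sketch, assuming the standard convex combination of “Min” and the arithmetic average with compensation parameter γ (the function name is ours; the paper’s exact formulation may differ):

```python
def fuzzy_and(memberships, gamma):
    """Werner's 'fuzzy and' hybrid operator:
    gamma * min(mu) + (1 - gamma) * arithmetic mean of mu.
    gamma = 1 recovers the fully competitive 'Min' strategy,
    gamma = 0 the fully compensatory arithmetic average."""
    assert 0.0 <= gamma <= 1.0
    return gamma * min(memberships) + (1 - gamma) * sum(memberships) / len(memberships)
```

Sweeping γ over [0, 1] and re-solving the goal program at each value is what traces out the different competitive-cum-compensatory strategies.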
       
  • Solving transcendental equation using artificial neural network
    • Abstract: Publication date: Available online 14 September 2018. Source: Applied Soft Computing. Author(s): S.K. Jeswal, S. Chakraverty Transcendental equations play an important role in solving various science and engineering problems. Many transcendental equations cannot be solved by the usual numerical methods. Accordingly, this paper gives a novel idea for solving transcendental equations using the concept of an Artificial Neural Network (ANN). A multilayer network architecture (viz. a four-layer architecture) is proposed for solving a transcendental equation. The detailed network architecture, with the procedures to solve both a single transcendental equation and a system of transcendental equations, is discussed. The weights from the input layer to the first hidden layer consist of the unknown variable, and the other weights in the different layers are the known coefficients of the given transcendental equation. After training by the proposed steps and the back-propagation technique, starting with a guess value(s), the unknown variable(s) converge depending upon the accuracy, thereby giving the solution of the equation. A few standard example problems are presented to validate the proposed method. Further, two examples are included to show the applicability of the ANN method in comparison to a well-known numerical method. Moreover, an application problem of a junction diode circuit is also addressed.
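The core idea, iteratively adjusting an unknown weight from a guess value until the equation residual vanishes, can be illustrated with plain gradient descent on the squared residual. This is only an analogy to the paper’s four-layer network, with all names and step sizes assumed:

```python
import math

def solve_transcendental(f, x0, lr=0.01, tol=1e-8, max_iter=100000):
    """Drive f(x) toward 0 by gradient descent on f(x)^2, using a
    central-difference numerical derivative; x0 is the initial guess."""
    x, h = x0, 1e-6
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        grad = 2 * fx * (f(x + h) - f(x - h)) / (2 * h)  # d/dx of f(x)^2
        x -= lr * grad
    return x

# Example transcendental equation: cos(x) = x
root = solve_transcendental(lambda x: math.cos(x) - x, x0=0.5)
```

Like the network training in the abstract, convergence depends on the starting guess and the requested accuracy.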
       
  • An ameliorated particle swarm optimizer for solving numerical optimization
           problems
    • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Ke Chen, Fengyu Zhou, Yugang Wang, Lei Yin Although the particle swarm optimizer (PSO) has been widely used to address various complicated engineering problems, it is likely to suffer from a lack of diversity and from an ineffective balance between global and local search ability during the search process. In this paper, we report an innovative and improved optimization method called the ameliorated particle swarm optimizer (A-PSO), which differs from the original PSO algorithm and its variants in the parameter update and in the position generation of each particle. In A-PSO, nonlinear dynamic acceleration coefficients, a logistic map and a modified particle position update approach are introduced into PSO to improve the solution quality and accelerate the global convergence rate. Twenty well-known numerical optimization functions are adopted to evaluate the effectiveness of the proposed method, and it is illustrated that, for most numerical optimization problems, the convergence performance and search accuracy of A-PSO are superior to those of similar heuristic optimization algorithms and other well-known PSO variants. Namely, the proposed A-PSO technique has a faster convergence rate and is more stable than other PSO variants and similar population-based methods for almost all numerical optimization problems. Therefore, the A-PSO method can be used as a new optimization technique for solving numerical optimization problems.
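The two ingredients named above, chaotic (logistic-map) initialization and time-varying acceleration coefficients, can be sketched in a minimal PSO on the sphere function. The particular coefficient schedules and constants here are illustrative assumptions, not the A-PSO formulas:

```python
import random

def logistic_map(n, x0=0.7, r=4.0):
    """Chaotic sequence in (0, 1) used to spread the initial positions."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def pso_sphere(dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0)):
    """Minimal PSO minimizing sum(x_i^2), with logistic-map initialization
    and nonlinear time-varying inertia/acceleration coefficients."""
    lo, hi = bounds
    f = lambda x: sum(v * v for v in x)
    chaos = logistic_map(n_particles * dim)
    pos = [[lo + (hi - lo) * chaos[i * dim + d] for d in range(dim)]
           for i in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        w = 0.9 - 0.5 * (t / iters) ** 2   # nonlinear inertia decay (assumed)
        c1 = 2.5 - 2.0 * (t / iters) ** 2  # cognitive coefficient shrinks
        c2 = 0.5 + 2.0 * (t / iters) ** 2  # social coefficient grows
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fx = f(pos[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = pos[i][:], fx
    return gbest, gbest_f
```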
       
  • A PCA-assisted hybrid algorithm combining EAs and adjoint methods for
           CFD-based optimization
    • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Dimitrios Kapsoulis, Konstantinos Tsiakas, Xenofon Trompoukis, Varvara Asouti, Kyriakos Giannakoglou Hybrid optimization algorithms, which combine Evolutionary Algorithms (EAs) and Gradient-Based (GB) methods to refine the most promising solutions, are valuable tools for engineering optimization. Several hybrid methods can be found in the literature; differences among them are associated with the criteria used to select individuals for refinement through the GB method and with the feedback the EA gets from the latter. GB methods require the gradient of the objective functions with respect to the design variables. By employing the adjoint method in problems governed by partial differential equations, the cost of computing the gradient becomes independent of the number of design variables. For the multi-objective optimization problems this paper exclusively deals with, the availability of the gradients of all objective functions is not enough. Hybrid algorithms require the computation of descent directions in the objective space capable of improving the current front of non-dominated solutions. Using the sum of weighted objectives as the new objective function is ineffective. In this paper, a method which refrains from using arbitrarily defined weights is proposed. The method is driven by data obtained from the Principal Component Analysis (PCA) of the objective function values of the elite individuals at each generation of the EA. The PCA, whose computational cost is that of solving an eigenproblem, identifies the direction in the objective space along which the current front of non-dominated solutions should be improved. This, along with the gradients computed by the adjoint method, is used by the GB method to refine selected individuals. 
The efficiency of the proposed hybrid algorithm is further improved by employing online trained surrogate models or metamodels and Kernel PCA within the EA-based search. The proposed method is demonstrated in aerodynamic shape optimization problems, using in-house Computational Fluid Dynamics software and its adjoint.
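The PCA step, extracting a dominant direction from the elite individuals’ objective values, can be sketched via power iteration on the sample covariance matrix. This is a generic illustration of finding a first principal component; the function name and iteration count are assumptions, not the paper’s implementation:

```python
import math

def principal_direction(points, iters=200):
    """First principal component (unit vector) of a set of points in
    objective space, via power iteration on the covariance matrix."""
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in points) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```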
       
  • Novel roughness measurement for grinding surfaces using simulated data by
           transfer kernel learning
    • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Hang Zhang, Jian Liu, Shengfeng Chen, Weifang Wang In conventional visual roughness measurement methods, constructing a relationship between an image feature index and surface roughness requires a large number of samples, with a wide range of known roughness at uniform intervals, as input for training or fitting. Considering these challenges, this paper proposes a visual roughness measurement method based on simulated data and transfer kernel learning. In the proposed method, a virtual sample with specified roughness is first created via non-Gaussian surface digital simulation and three-dimensional entity modeling technology. After that, surface images of the virtual and processed samples are generated through image simulation and actual imaging experiments. Next, the discrepancy between the image feature index distributions of the simulation and actual domains is adapted by transfer kernel learning. A regression model is trained on the simulated samples with known roughness, and is later generalized to the actual domain via a cross-domain kernel matrix to predict the roughness of the processed samples. To transfer the similar red and green mixing effects between the actual and simulation domains, a relative mixing degree index and a mixing region area index are designed based on the color information. By comparing these two indexes with the image pixel color difference index and the image sharpness index, the feasibility and effectiveness of the proposed method are validated. The experimental results show that the proposed method can achieve an accuracy of over 90% based on the simulated data and transfer kernel learning. The proposed method provides a new improvement strategy for visual roughness measurement.
       
  • Economic dispatch of power systems using an adaptive charged system search
           algorithm
    • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): P. Zakian, A. Kaveh In this article, an adaptive charged system search (ACSS) algorithm is developed for the solution of economic dispatch problems. The proposed ACSS is based on the charged system search (CSS), a meta-heuristic algorithm utilizing the governing Coulomb law from electrostatics and the Newtonian laws of mechanics. Here, two effective strategies are employed in the new ACSS. The first is an improved initialization based on opposition-based learning and subspacing techniques. The second is a Levy flight random walk that enriches the updating process of the algorithm. Many types of economic dispatch cases, comprising 6-, 13-, 15-, 40-, 160- and 640-unit generation systems, are tested as benchmarks ranging from small- to large-scale problems. These problems entail different constraints, consisting of power balance, ramp rate limits, prohibited operating zones and valve point load effects. Additionally, multiple fuel options and transmission losses are included in some test cases. Moreover, simple constraint handling functions are developed in terms of a penalty approach, which can readily be incorporated into any other meta-heuristic algorithm. Results indicate that the ACSS either outperforms or performs comparably to the CSS and other optimizers in finding optimized fuel costs.
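The two strategies named above are standard building blocks, so they can be sketched generically. The Lévy step below uses Mantegna’s algorithm, a common way to implement the heavy-tailed walk, and the opposition-based initializer keeps the better of each point and its mirror; both are illustrative assumptions, not the ACSS formulas:

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-flight step via Mantegna's algorithm: mostly small moves
    with occasional long jumps (heavy-tailed distribution)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def opposition_init(lo, hi, n, objective):
    """Opposition-based initialization: for each random point x, also
    consider its opposite lo + hi - x and keep the better of the two."""
    pts = [random.uniform(lo, hi) for _ in range(n)]
    return [min(x, lo + hi - x, key=objective) for x in pts]
```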
       
  • Integrating the whale algorithm with Tabu search for quadratic assignment
           problem: A new approach for locating hospital departments
    • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Mohamed Abdel-Baset, M. Gunsekaran, Doaa El-Shahat, Seyedali Mirjalili The Quadratic Assignment Problem (QAP) is a combinatorial NP-hard optimization problem that is not solvable in polynomial time. It has a large number of real-world applications in diverse fields (e.g. facility arrangement in a hospital). The Whale Optimization Algorithm is a new meta-heuristic that has achieved great success in solving continuous problems. In this paper, we propose a memetic algorithm, the Whale optimization Algorithm (WA) Integrated with Tabu Search (WAITS), for solving the QAP. This work employs Tabu Search as a local search algorithm to improve the quality of the solutions obtained by WA for the QAP, an attempt to address the slow convergence and weak local search that are the main drawbacks of WA. Due to the combinatorial nature of the QAP, the continuous values generated by the standard WA are converted to discrete values by largest-real-value mapping. WAITS is further enhanced by a local search that defines a set of neighborhood solutions to improve the accuracy of the obtained solutions. Fourteen different case studies including 122 test problems are employed for analyzing the performance of the proposed WAITS. The results show that the proposed memetic algorithm finds near-optimal solutions within an acceptable computational time. WAITS is compared with several algorithms from the literature, and the results show that it outperforms similar algorithms.
       
  • A multi-objective evolutionary algorithm based on decomposition and
           constraint programming for the multi-objective team orienteering problem
           with time windows
    • Abstract: Publication date: Available online 9 September 2018. Source: Applied Soft Computing. Author(s): Wanzhe Hu, Mahdi Fathi, Panos M. Pardalos The team orienteering problem with time windows (TOPTW) is a well-known variant of the orienteering problem (OP), which originated from the sport of orienteering. Since the TOPTW has many real-world applications, such as disaster relief routing and home fuel delivery, it has been studied extensively. In the classical TOPTW, only one profit is associated with each checkpoint, while in many practical applications each checkpoint can be evaluated from different aspects, which results in multiple profits. In this study, the multi-objective team orienteering problem with time windows (MOTOPTW), where checkpoints with multiple profits are considered, is introduced to find the set of Pareto optimal solutions to support decision making. Moreover, a multi-objective evolutionary algorithm based on decomposition and constraint programming (CPMOEA/D) is developed to solve the MOTOPTW. CPMOEA/D integrates the advantages of decomposition approaches for handling multi-objective optimization problems with those of constraint programming for combinatorial optimization problems. Finally, the proposed algorithm is applied to public benchmark instances. The results are compared with the best-known solutions from the literature and show improvement over them.
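Decomposition-based MOEAs such as MOEA/D turn a multi-objective problem into a family of scalar subproblems. The textbook weighted Tchebycheff form below illustrates the decomposition idea; it is not necessarily the exact scalarization used in CPMOEA/D:

```python
def tchebycheff(fvals, weights, ideal):
    """Weighted Tchebycheff scalarization: max_i w_i * |f_i - z*_i|,
    where z* is the ideal point. Minimizing this for a spread of
    weight vectors traces out the Pareto front."""
    return max(w * abs(f - z) for w, f, z in zip(weights, fvals, ideal))
```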
       
  • A robust power system stabilizer for enhancement of stability in power
           system using adaptive fuzzy sliding mode control
    • Abstract: Publication date: Available online 9 September 2018. Source: Applied Soft Computing. Author(s): Prakash K. Ray, Shiba R. Paital, Asit Mohanty, Foo Y.S. Eddy, Hoay Beng Gooi This paper presents the design of a power system stabilizer (PSS) based on a conventional fuzzy-PID and a type-1 fuzzy controller for stability improvement in single- as well as multi-machine power systems connected to an infinite bus under different loading conditions. Fuzzy and integral sliding mode controllers (FSMC and I-SMC) are then incorporated with the PSS to improve stability performance. However, chattering in these controllers may lead to sustained low-frequency oscillations of small magnitude that disturb the power transfer capability and the stability of the system. Therefore, to enhance the performance and efficiency of the power system, a novel adaptive fuzzy sliding mode controller (AFSMC) with a robust sliding surface is designed to overcome the possible chattering due to system uncertainties and dynamics. In the proposed AFSMC, stability is ensured through Lyapunov analysis and a synthesis test. In addition to the graphical simulation analysis, a quantitative stability approach and a real-time test using OPAL-RT OP5600 are carried out to augment the stability study. Further, stability tests using eigenmodes, root locus and Bode plots are presented to assess the stability performance of the proposed controller. Both the qualitative and quantitative analyses confirm better and more robust performance of the proposed controller in comparison to the conventional fuzzy-PID and type-1 fuzzy controllers.
       
  • Hamiltonian path based image steganography scheme with improved
           imperceptibility and undetectability
    • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Gyan Singh Yadav, Aparajita Ojha Steganography is the art of concealing secret information in some cover medium (picture, text, etc.) and is used at large scale in different security systems. Various schemes for data hiding in images have been proposed over the last two decades that focus on improving the visual quality of the cover image, but there is always a trade-off between imperceptibility and payload. Another important issue is data security: histograms can reveal the existence and quantum of data hidden in a stego image. So the main challenge for high-capacity data embedding schemes is to reduce the image distortion as well as to keep the data secure by making use of alternate mechanisms such as secret keys. In this paper, we propose a novel data hiding scheme with a two-fold objective. First, data security is ensured by using a Hamiltonian path to generate a secret key for accessing the data embedding patterns. Due to the complexity of Hamiltonian path generation, the embedding patterns become almost impossible to guess, even if the presence of secret data is detected by steganographic attacks. Secondly, two optimization strategies are proposed to minimize the embedding effect on the stego image. While the first strategy focuses on reducing the bit-flipping cost, the other reduces histogram distortion. The proposed method is shown to outperform some existing and recent approaches in terms of both payload and imperceptibility.
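The security mechanism, a secret traversal order over the pixels that an attacker cannot guess, can be sketched with a key-seeded permutation standing in for the paper’s Hamiltonian path. Function names and the LSB embedding are illustrative assumptions:

```python
import random

def embed_bits(pixels, bits, key):
    """Embed payload bits in the LSBs of pixels visited along a
    key-dependent traversal order (a stand-in for the Hamiltonian-path
    order of the paper)."""
    path = list(range(len(pixels)))
    random.Random(key).shuffle(path)  # secret, key-derived visiting order
    out = pixels[:]
    for bit, idx in zip(bits, path):
        out[idx] = (out[idx] & ~1) | bit
    return out

def extract_bits(pixels, n_bits, key):
    """Recover the payload by regenerating the same key-dependent path."""
    path = list(range(len(pixels)))
    random.Random(key).shuffle(path)
    return [pixels[idx] & 1 for idx in path[:n_bits]]
```

Without the key, an attacker does not know which pixels carry payload bits or in what order, even if steganalysis suggests data is present.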
       
  • Self-evolving type-2 fuzzy brain emotional learning control design for
           chaotic systems using PSO
    • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Tien-Loc Le, Chih-Min Lin, Tuan-Tu Huynh This work presents the design of an interval type-2 fuzzy brain emotional learning controller (T2FBELC) combined with a self-evolving algorithm that helps the network automatically achieve the optimal construction from an empty initial rule base. In the control system design, the T2FBELC is the main controller, used to mimic an ideal controller, and a robust controller compensates for the residual error. Implementing the steepest descent gradient approach, the parameter adaptive laws of the proposed system are derived. Besides, particle swarm optimization (PSO) is applied to find the optimal learning rates for the parameter adaptive laws. The stability of the proposed algorithm is guaranteed using a Lyapunov function. Finally, the effectiveness of the proposed control system is verified by numerical simulations of chaotic systems.
       
  • Modified differential evolution approach for practical optimal reactive
           power dispatch of hybrid AC − DC power systems
    • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Samir Sayah Optimal reactive power dispatch (ORPD) is one of the most important functions of modern energy management systems, used to ensure stable and secure operating conditions of power networks. Currently, high voltage direct current (HVDC) transmission systems are widely used in modern electrical power systems due to their economic and technical advantages. The incorporation of a DC link introduces more complexity into the ORPD computation. In this paper, an efficient enhanced differential evolution (DE) algorithm with a new mutation strategy is proposed to solve the ORPD problem incorporating a two-terminal HVDC transmission link. To the best of the author’s knowledge, the performance of DE on the ORPD task has not been investigated before. The performance of the proposed strategy has been investigated on two hybrid AC−DC systems: the modified New England 39-bus test system, and the practical, large-scale Algerian 114-bus electric network. To illustrate the effectiveness and potential of the proposed approach, the achieved results are evaluated and compared with those reported recently in the literature.
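For readers unfamiliar with DE, the baseline DE/rand/1/bin scheme that such enhanced variants start from can be sketched on a toy objective. This is the generic textbook loop, not the paper’s modified mutation strategy; all parameter values are assumptions:

```python
import random

def de_minimize(f, bounds, np_=20, F=0.5, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two
    random vectors added to a third, binomially cross over with the
    target, and keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(dim)  # at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

In the ORPD setting, the decision vector would hold controls such as generator voltages and transformer taps, with the objective built from losses and constraint penalties.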
       
  • An adaptive fuzzy predictive control of nonlinear processes based on
           Multi-Kernel least squares support vector regression
    • Abstract: Publication date: Available online 7 September 2018. Source: Applied Soft Computing. Author(s): I. Boulkaibet, K. Belarbi, S. Bououden, M. Chadli, T. Marwala In this paper, an adaptive fuzzy Generalized Predictive Control (GPC) is proposed for discrete-time nonlinear systems via a Takagi-Sugeno system based Multi-Kernel Least Squares Support Vector Regression (TS-LSSVR). The proposed adaptive TS-LSSVR strategy is constructed using multi-kernel least squares support vector regression, and its learning procedure is achieved in three steps. In the first, offline step, the antecedent parameters of the TS-LSSVR are initialized using a fuzzy c-means clustering algorithm. The second, online step deals with the adaptation of the antecedent parameters, implemented using a back-propagation algorithm. In the final online step, the Fixed-Budget Kernel Recursive Least Squares algorithm is used to obtain the consequent parameters. Furthermore, an adaptive generalized predictive controller for nonlinear systems is introduced by integrating the proposed adaptive TS-LSSVR into the GPC. The reliability of the proposed adaptive TS-LSSVR GPC controller is investigated by controlling two nonlinear systems: a surge tank and a continuous stirred tank reactor (CSTR). The proposed controller has demonstrated good results and efficiently controlled the nonlinear plants. Furthermore, the adaptive TS-LSSVR GPC can deal with disturbances and variations in the nonlinear systems.
       
  • Automatically discovering clusters of algorithm and problem instance
           behaviors as well as their causes from experimental data, algorithm
           setups, and instance features
    • Abstract: Publication date: Available online 5 September 2018. Source: Applied Soft Computing. Author(s): Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, Ke Tang In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Most algorithms in the domain are anytime algorithms, meaning that they can improve their approximation quality over time. This means that one algorithm may initially perform better than another one, but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared. Moreover, a researcher does not just want to know which algorithm performs best and which problem is the hardest – she/he wants to know why. In this paper, we introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose causes why a certain algorithm setup (or problem instance) belongs to a certain behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster ids. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of “quantitative” algorithm behavior, our process also allows for “qualitative” conclusions by feeding it with data that is normalized based on problem features or algorithm parameters. 
It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies. We conclude our article by a comprehensive analysis of the drawbacks of our method and with suggestions on how it can be improved.
       
  • Improved optimal harmonic reduction method in PWM AC–AC converter using
           modified Biogeography-Based Optimization Algorithm
    • Abstract: Publication date: Available online 5 September 2018. Source: Applied Soft Computing. Author(s): Mohammad Esmaeil Hassanzadeh, Saeed Hasanvand The pulse width modulation (PWM) AC-AC converter is a power electronic device with many applications, because it can control the load voltage and active power. This converter chops the voltage waveform, so the output voltage contains harmonics. Since the fundamental component of the output voltage determines the active power, this paper proposes assigning a weighting factor to the optimization problem of harmonic reduction in the converter. To solve the optimization problem, a hybrid algorithm combining biogeography-based optimization (BBO) and simulated annealing (SA) is presented, which has the advantages of both methods simultaneously. Since many evolutionary optimization algorithms may not reach the global optimum, in order to improve the search process, a local search algorithm (SA) is applied in an inner loop to the solution obtained by the main search algorithm (BBO). In other words, a second-level optimization by SA is applied to the best answer of BBO in each iteration to find the global extreme point. The proposed algorithm has been applied to some standard benchmark functions. After validation on these problems, the main problem, a practical one of minimizing the harmonics of a PWM AC-AC converter, is solved. Simulation results show the efficiency of the proposed method, with the weighting factor allowing harmonic reduction as well as improvement of the fundamental voltage in a practical power engineering problem.
       
  • A decomposition-based multi-objective optimization approach considering
           multiple preferences with robust performance
    • Abstract: Publication date: Available online 5 September 2018. Source: Applied Soft Computing. Author(s): Xinqi Zhu, Zhenghong Gao, Yiming Du, Shixin Cheng, Fang Xu In this paper, we propose a decomposition-based multi-objective optimization approach that considers multiple preferences, expressed by means of reference points, and has robust performance (mprMOEA/D). This algorithm is able to find multiple preferred regions in a single run, and its performance is robust across different problems. The proposed algorithm utilizes a subpopulation (SP) for each reference point to search for the corresponding preferred region. An external population (EP) is maintained to selectively preserve solutions from all the SPs, and it can be revisited when producing new solutions for each SP. The proposed collaboration mechanism between the SPs and the EP helps preserve convergence and diversity. In order to obtain robust performance, local crossover coordinate systems, which coincide with the local manifold of the Pareto set, are introduced into mprMOEA/D for the crossover operator of differential evolution, alleviating the influence of the overall Pareto set shape. The effects of these techniques on the proposed algorithm are discussed, and the robust performance of the proposed approach is validated on numerical functions in comparison with four existing approaches. Experimental results show that the proposed algorithm outperforms the other algorithms.
       
  • A dynamic multiple populations particle swarm optimization algorithm based
           on decomposition and prediction
    • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Ruochen Liu, Jianxia Li, Jing Fan, Licheng Jiao Dynamic multi-objective optimization problems (DMOPs) involve multiple optimization objectives which are in conflict with one another and change over time or with the environment. A novel dynamic multiple population particle swarm optimization algorithm based on decomposition and prediction (denoted DP-DMPPSO) is proposed to solve DMOPs. Each objective is optimized by one population, and each population shares its information with the other populations. The populations evolve independently using a modified particle swarm optimization (PSO). An external archive is adopted to store the non-dominated solutions selected from all populations during the evolutionary process, and the archive is output as the final solution set. A mechanism for updating the archive based on decomposition of the objective space (DOS) is proposed. In addition, a population prediction mechanism is employed to accelerate convergence to the true Pareto front. DP-DMPPSO is tested on a set of benchmark problems and compared with several state-of-the-art algorithms. The results show that DP-DMPPSO is highly competitive for solving dynamic multi-objective optimization problems.
       
  • A hybrid hierarchical fault diagnosis method under the condition of
           incomplete decision information system
    • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Yawei Ge, Mingqing Xiao, Zhao Yang, Lei Zhang, Yajun Liang. Attribute reduction plays a key role in the fault diagnosis of incomplete decision information systems (IDIS), as it improves the efficiency and accuracy of fault diagnosis. Tolerance relation-based attribute reduction is widely used in IDIS. However, because of the incompleteness and uncertainty of the data, relation-based classification in practical attribute reduction problems for fault diagnosis is inherently fuzzy, and traditional tolerance relation-based attribute reduction methods are therefore not suitable for fault diagnosis in IDIS. This paper proposes a hybrid hierarchical fault diagnosis method that combines a tolerance relation-based attribute reduction method with an integrated logarithmic fuzzy preference programming (LFPP) methodology. The method utilizes both qualitative and quantitative data and constructs a hierarchical structure for fault diagnosis in IDIS. As the sorting part of the proposed method, the integrated LFPP methodology directly and simultaneously obtains a unique normalized optimal significance priority vector for the attribute fuzzy pairwise comparison matrices. As the decomposing part, the tolerance relation-based attribute reduction method splits the fault attribute reduction problem into multiple sub-problems. Hence, the proposed hybrid method can handle the fuzziness of relation-based classification and mitigate the complexity of attribute reduction for fault diagnosis in IDIS. Finally, an engineering case for strategy reduction in fault diagnosis demonstrates the feasibility of the proposed method and yields the reduced diagnosis strategies. A further test case verifies the validity of the reduction results and compares the proposed method with other methods, showing that it is efficient and offers advantages in accuracy, difficulty reduction and complexity mitigation in fault diagnosis.
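The tolerance relation at the heart of such reductions is simple to state: two objects are tolerant under an attribute subset if they agree, or are missing a value, on every attribute in the subset. A minimal sketch, assuming a toy decision table with '*' for missing values (the data and attribute subset are invented, not from the paper):

```python
# Minimal sketch of a tolerance relation over an incomplete decision table.
# Objects are rows; '*' marks a missing value; B is a set of column indices.

def tolerant(x, y, B):
    """True if objects x and y agree (or have a missing value) on every attribute in B."""
    return all(x[a] == y[a] or x[a] == '*' or y[a] == '*' for a in B)

def tolerance_class(table, i, B):
    """Indices of all objects tolerant with object i under attributes B."""
    return [j for j, y in enumerate(table) if tolerant(table[i], y, B)]

table = [
    ['1', '0', '*'],   # object 0
    ['1', '*', '2'],   # object 1
    ['0', '0', '2'],   # object 2
]
print(tolerance_class(table, 0, B=[0, 1, 2]))  # → [0, 1]
```

Objects 0 and 1 fall in the same tolerance class because every disagreement involves a missing value; object 2 differs on a known attribute and is excluded.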
       
  • An innovative deep architecture for aircraft hard landing prediction based
           on time-series sensor data
    • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Chao Tong, Xiang Yin, Jun Li, Tongyu Zhu, Renli Lv, Liang Sun, Joel J.P.C. Rodrigues. This paper proposes an innovative deep architecture for aircraft hard landing prediction based on Quick Access Recorder (QAR) data. In industrial IoT, devices collect data and send them to an open IoT cloud platform for processing and analysis; the prediction of aircraft hard landings is a typical IoT application in the aviation field. Firstly, the 15 most relevant landing sensor parameters are chosen from 260 parameters according to both aeronautical theory and feature engineering. Secondly, a deep prediction model based on Long Short-Term Memory (LSTM) has been developed to predict hard landing incidents using the selected sensor data. We then adjust the model structure and conduct contrastive experiments. Finally, we use the Mean Square Error (MSE) as the evaluation criterion to select the best model. Experimental results show higher prediction accuracy on QAR datasets than the state-of-the-art, indicating that the model is effective and accurate for hard landing prediction, which helps to guarantee passengers’ safety and reduce the incidence of landing accidents. The proposed work is also conducive to building and developing industrial IoT systems in the aviation field.
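The core computation of such a model is an LSTM cell stepping over a multichannel sensor sequence. A far smaller sketch of that step, assuming a single cell with random weights and illustrative sizes (the 15-channel input echoes the paper's feature selection; the hidden size, sequence length, and weights are invented, not the paper's architecture):

```python
import numpy as np

# One LSTM cell run over a sensor time series; gates stacked as [input, forget, cell, output].
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """Single LSTM time step for input x, hidden state h, cell state c."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, g, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), np.tanh(z[2*n:3*n]), sigmoid(z[3*n:])
    c_new = f * c + i * g                  # forget old memory, write new candidate
    h_new = o * np.tanh(c_new)             # expose a gated view of the memory
    return h_new, c_new

rng = np.random.default_rng(0)
n_feat, n_hid, T = 15, 8, 20               # 15 sensor channels, 20 time steps
W = rng.normal(0, 0.1, (4 * n_hid, n_feat))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(T, n_feat)):     # run the cell over the whole sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                             # final hidden state summarises the sequence
```

The final hidden state would feed a small output layer producing the hard/soft landing prediction, with MSE as the training loss.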
       
  • Exploring the probabilistic graphic model of a hybrid multi-objective
           Bayesian estimation of distribution algorithm
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Marcella S.R. Martins, Myriam Delgado, Ricardo Lüders, Roberto Santana, Richard A. Gonçalves, Carolina P. de Almeida. The Hybrid Multi-objective Bayesian Estimation of Distribution Algorithm (HMOBEDA) has been shown to be very competitive for Many-Objective Optimization Problems (MaOPs). The Probabilistic Graphic Model (PGM) of HMOBEDA expands the possibilities for exploration, as it provides the joint probability of decision variables, objectives, and configuration parameters of an embedded local search. This work investigates different sampling mechanisms of HMOBEDA, applying the considered approaches to two different combinatorial MaOPs. Moreover, the paper provides a broad set of statistical analyses of its PGM model. These analyses evaluate how the interactions among variables, objectives and local search parameters are captured by the model, and how information collected from different runs can be aggregated and explored in a Probabilistic Pareto Front. In the experiments, two variants of HMOBEDA, each with a different set of evidence variables fixed during the sampling process, are compared with the original version. Results for instances of the multi-objective knapsack problem with 2 to 5 and 8 objectives show that the best variant outperforms the original HMOBEDA in terms of convergence and diversity of the solution set. This best variant is then compared with five state-of-the-art evolutionary algorithms on the knapsack instances as well as a set of MNK-landscape instances with 2, 3, 5 and 8 objectives. HMOBEDA outperforms all of them.
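The idea of fixing evidence while sampling a probabilistic model can be illustrated with a deliberately simplified univariate model. A toy sketch, assuming a single-objective bit-string knapsack with invented profits and one clamped variable (the paper's actual PGM is a multivariate Bayesian network over variables, objectives, and local-search parameters):

```python
import numpy as np

# Toy estimation-of-distribution loop with a fixed "evidence" variable: the
# clamped bit is imposed on every sample, and the marginals are re-estimated
# from the selected individuals each generation.
rng = np.random.default_rng(1)
values = np.array([8, 4, 7, 5, 3, 6])        # toy knapsack profits (invented)
weights = np.array([5, 3, 4, 4, 2, 5])
capacity = 12
evidence = {0: 1}                            # clamp variable 0 to 1 while sampling

def fitness(pop):
    v = pop @ values
    v[pop @ weights > capacity] = 0          # infeasible solutions score zero
    return v

p = np.full(6, 0.5)                          # univariate marginal probabilities
for _ in range(30):
    pop = (rng.random((40, 6)) < p).astype(int)
    for var, val in evidence.items():        # impose the evidence on every sample
        pop[:, var] = val
    f = fitness(pop)
    elite = pop[np.argsort(f)[-10:]]         # truncation selection: keep the 10 best
    p = 0.5 * p + 0.5 * elite.mean(axis=0)   # re-estimate the marginals
print(p.round(2))
```

The clamped variable's marginal converges to the evidence value, while the remaining marginals adapt to the best feasible packings found under that constraint.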
       
  • Optimized tool path planning for five-axis flank milling of ruled surfaces
           using geometric decomposition strategy and multi-population harmony search
           algorithm
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Jin Yi, Chih-Hsing Chu, Chi-Lung Kuo, Xinyu Li, Liang Gao. Tool path planning is key to ensuring high machining quality and productivity in 5-axis flank milling of ruled surfaces. Previous studies have shown that optimization-driven tool path planning can effectively reduce the geometrical errors on the finished surface. However, solving the corresponding optimization problem is a challenging task involving a large number of decision variables. This paper proposes a novel approach to generating optimized tool paths for the 5-axis flank finishing cut, based on a geometric decomposition strategy and a multi-population harmony search algorithm. The proposed approach geometrically divides the surface to be machined into a number of segments. The tool paths on those sub-surfaces are independently optimized by the multi-population harmony search algorithm, and the individual tool paths are then combined to form a complete one. Test results on representative surfaces show that the proposed approach produces higher machining precision with less computational time than the previous methods compared, and the computational time is further reduced by parallel computing based on the Message Passing Interface (MPI). A detailed analysis characterizes how the number of divisions affects the optimization results, and the proposed approach also shows good scalability with an increasing number of cutter locations.
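The decompose-optimize-recombine idea can be sketched compactly: run an independent harmony search per surface segment and concatenate the per-segment solutions. A minimal sketch, assuming a placeholder sphere objective and invented parameter values in place of the real machining-error objective and tuned settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def harmony_search(obj, dim, iters=300, hms=10, hmcr=0.9, par=0.3, bw=0.1):
    """Basic harmony search minimising obj over [-1, 1]^dim."""
    hm = rng.uniform(-1, 1, (hms, dim))              # harmony memory
    cost = np.array([obj(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                  # recall a value from memory...
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:               # ...possibly pitch-adjusted
                    new[d] += bw * rng.uniform(-1, 1)
            else:                                    # ...or improvise a fresh one
                new[d] = rng.uniform(-1, 1)
        c = obj(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                          # replace the worst harmony
            hm[worst], cost[worst] = new, c
    return hm[np.argmin(cost)]

sphere = lambda x: float(np.sum(x * x))              # stand-in for the error objective
segments = 4                                         # surface split into 4 segments
path = np.concatenate([harmony_search(sphere, dim=3) for _ in range(segments)])
print(len(path))                                     # combined tool path, 4 x 3 variables
```

Because the per-segment searches are independent, this is also the natural unit of parallelism that MPI-style parallel computing exploits.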
       
  • Oscillatory Particle Swarm Optimizer
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Haiyan Shi, Shilong Liu, Hongkun Wu, Ruowei Li, Sanchi Liu, Ngaiming Kwok, Yeping Peng. The Particle Swarm Optimization (PSO) algorithm is an attractive meta-heuristic approach for difficult optimization problems. It is able to produce satisfactory results when classical analytic methods cannot be applied. However, the design of PSO has usually been based on ad-hoc attempts, and its behavior could not be exactly specified. In this work, we propose to drive particles into oscillatory trajectories so that the search space can be covered more completely. A difference-equation-based analysis is conducted to reveal the conditions that guarantee trajectory oscillation and solution convergence, from which the settings of the cognitive and social learning factors and the inertia weight are determined. In addition, a new strategy that directs these parameters to follow a linearly decreasing profile with a perturbation is formulated. Experiments on function optimization are conducted and compared with currently available methods. The results confirm that the proposed Oscillatory Particle Swarm Optimizer (OSC-PSO) outperforms other recent PSO algorithms that use adaptive inertia weights.
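A minimal PSO sketch showing the parameter schedule described above: an inertia weight following a linearly decreasing profile with a small random perturbation. The learning-factor values, perturbation magnitude, and sphere test function are illustrative assumptions; the paper's derived oscillation-guaranteeing settings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dim, iters = 20, 5, 200
w_start, w_end, c1, c2 = 0.9, 0.4, 1.5, 1.5           # illustrative settings
f = lambda x: np.sum(x * x, axis=-1)                  # sphere test function

x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), f(x)                         # personal bests
g = pbest[np.argmin(pcost)]                           # global best
for t in range(iters):
    w = w_start + (w_end - w_start) * t / iters       # linearly decreasing profile...
    w += 0.05 * rng.uniform(-1, 1)                    # ...with a small perturbation
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    cost = f(x)
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[np.argmin(pcost)]
print(round(float(f(g)), 6))                          # best cost found
```

The perturbation keeps the velocity update from settling into a fixed contraction rate, which is the intuition behind sustaining oscillatory coverage of the search space.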
       
  • Minimizing the resource consumption of heterogeneous batch-processing
           machines using a copula-based estimation of distribution algorithm
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Chuang Liu, Huaping Chen, Rui Xu, Yu Wang. The two-stage flow-shop scheduling problem investigated in this work aims to minimize resource consumption for jobs with non-identical sizes. The flow shop consists of two batch-processing machines (BPMs): a parallel BPM and a sequential BPM. The makespan and resource consumption are considered together in this study: the makespan is the constraint, and the resource consumption is the objective. A copula-based Estimation of Distribution Algorithm (cEDA) is used to solve the problem. Individuals are encoded by the resource sequences allocated to all jobs on the two machines, and a convex resource consumption function is adopted to model the relationship between the processing times of the jobs and the resources allocated to them. A Gaussian distribution is adopted as the marginal probabilistic distribution of all components. The proposed copula function C1 assumes independence among the components, whereas the Clayton copula function C2, introduced for comparison, assumes that all components are interrelated. Computational experiments and comparisons verify the effectiveness of the proposed cEDA, and the copula functions C1 and C2 adopted in the cEDA approach are compared.
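Under the independence copula C1, sampling the joint distribution reduces to drawing each component from its own Gaussian marginal fitted on the selected individuals. A toy sketch of one such cEDA loop, assuming an invented quadratic objective and small problem sizes in place of the paper's resource-allocation encoding (the Clayton copula C2 would instead couple the components through a dependence parameter):

```python
import numpy as np

rng = np.random.default_rng(4)
obj = lambda pop: np.sum((pop - 3.0) ** 2, axis=1)   # toy cost: distance to 3

mu, sigma = np.zeros(4), np.full(4, 2.0)             # Gaussian marginals per component
for _ in range(60):
    pop = rng.normal(mu, sigma, (50, 4))             # independence-copula sampling
    elite = pop[np.argsort(obj(pop))[:15]]           # truncation selection
    mu = elite.mean(axis=0)                          # refit the marginals...
    sigma = np.maximum(elite.std(axis=0), 0.05)      # ...with a floor to keep sampling alive
print(mu.round(2))
```

The marginal means drift toward the low-cost region generation by generation; comparing this against a coupled-copula sampler is exactly the C1-versus-C2 experiment the abstract describes.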
       
  • A formal approach for matching and ranking trustworthy context-dependent
           services
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Afnan Bawazir, Wadee Alhalabi, Mubarak Mohamed, Akila Sarirete, Ammar Alsaig. Service Oriented Computing (SOC) is widely used and relied on in enterprise application development and distributed computing. The three fundamental interacting components in SOC are service providers, service requesters, and registries. Owing to the increasing number of services in registries, one of the challenges facing the requester when choosing a service is to find a trustworthy service that satisfies both the requester and provider contexts. Thus, a ranking approach is needed that takes into account both rich features and the contexts of the service requester and provider, in order to improve the applicability of the top-ranked results. By including trustworthiness requirements in the ranking process, with the aim of providing a reliable service with respect to the requester’s preferences, we put forward a generic structure and framework for matching and ranking trustworthy context-dependent services. We base our framework on logic and set theory and define a formal description of these services. We then assess the proposed architecture using a real-world case study, which demonstrates the effectiveness of the proposed ranking model architecture at finding the most suitable trustworthy service according to user preferences while considering context information. The proposed framework can be utilized in several application domains.
       
  • Speckle de-noising of clinical ultrasound images based on fuzzy spel
           conformity in its adjacency
    • Abstract: Publication date: Available online 29 August 2018. Source: Applied Soft Computing. Author(s): Rahul Roy, Susmita Ghosh, Ashish Ghosh. This article presents a novel speckle de-noising technique that leverages the properties of fuzzy set theory to preserve the context of the echo-texture and deal with vagueness and uncertainty in ultrasound images. The work is inspired by the theory of Michailovich and Tannenbaum (2006), which showed that the log transformation of clinical images converts speckle noise to white Gaussian noise (WGN) with outliers. In this context, we propose to apply a non-linear transformation (using functions with horizontal asymptotes) to the speckle noise, which maps the outliers to the asymptote of the function. This suppresses the outliers and thereby reduces the problem to the cancellation of WGN. We adhere to fuzzy set theory for defining such transformations, as they can be well characterized with vague linguistic terms. In this work, a property scene is generated using a membership function which has the ability to suppress the outliers. Thereafter, the property scene is updated such that the memberships of the spels in a neighborhood agree with each other with respect to the defined property. This, in turn, restores the homogeneity in the neighborhood of the image when the property scene is de-fuzzified back to image space. To measure the compliance of a neighborhood spel, the concept of a degree of conformity is introduced; this degree determines the contribution of the neighboring spel in the update of the property scene. To show the effectiveness of the proposed technique, a comparative analysis has been carried out against five state-of-the-art techniques on in silico images using five standard measures, and on in vivo ultrasound images with two performance indices. Moreover, to show the impact of the de-noising algorithm on downstream performance, a segmentation-based investigation is carried out on 102 ultrasound images of the carotid artery, and robustness under increasing noise levels is also investigated. Segmentation performance is reported in box plots of precision and recall. Qualitative and quantitative analyses reveal the promising performance of the proposed technique.
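The overall pipeline shape can be sketched in a few lines: log-transform the speckled image, map intensities through a bounded membership function whose horizontal asymptotes clip outliers, smooth memberships toward neighborhood agreement, then defuzzify. The S-shaped function and plain neighborhood averaging below are simplifications standing in for the paper's conformity-driven update; the synthetic gamma-distributed speckle is invented test data.

```python
import numpy as np

def denoise(img, iters=3):
    """Toy fuzzy de-noising: membership transform, neighbourhood agreement, defuzzify."""
    lo, hi = img.min(), img.max()
    logi = np.log1p(img)                                # speckle -> ~WGN with outliers
    mu = 1.0 / (1.0 + np.exp(-(logi - logi.mean())))    # bounded membership in (0, 1)
    for _ in range(iters):                              # pull spels toward their neighbours
        pad = np.pad(mu, 1, mode='edge')
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
        mu = 0.5 * mu + 0.5 * nbr
    return lo + mu * (hi - lo)                          # defuzzify back to image range

rng = np.random.default_rng(5)
clean = np.full((32, 32), 100.0)
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)       # synthetic multiplicative speckle
out = denoise(noisy)
print(out.shape)
```

The bounded membership plays the role of the horizontal-asymptote transformation, and the averaging pass is a crude proxy for the degree-of-conformity update.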
       
  • Backtracking search integrated with sequential quadratic programing for
           nonlinear active noise control systems
    • Abstract: Publication date: Available online 28 August 2018. Source: Applied Soft Computing. Author(s): Wasim Ullah Khan, ZhongFu Ye, Naveed Ishtiaq Chaudhary, Muhammad Asif Zahoor Raja. In the present work, the integrated strength of the backtracking search algorithm (BSA) and sequential quadratic programming (SQP) is exploited for nonlinear active noise control (ANC) systems. Approximation theory in the mean-squared sense is utilized to construct a cost function for the ANC system based on finite impulse response (FIR) and Volterra filtering procedures. The global search efficacy of BSA, aided by rapid local refinement with SQP, is applied to optimize the fitness function for ANC systems with sinusoidal, random and complex random signals, under several variants based on linear/nonlinear primary and secondary paths. Statistical observations demonstrate the worth of the stochastic solvers BSA and BSA-SQP in terms of accuracy, convergence and complexity.
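The hybrid pattern is: a backtracking-search-style global phase (trial solutions generated against a historical population, with greedy selection), followed by a local polish of the best point. In this sketch, a finite-difference gradient descent stands in for the SQP refinement, and a toy quadratic replaces the FIR/Volterra-based ANC cost; both substitutions are explicit simplifications.

```python
import numpy as np

rng = np.random.default_rng(6)
obj = lambda w: float(np.sum((w - 1.5) ** 2))         # toy mean-squared cost

pop = rng.uniform(-4, 4, (20, 3))
hist = rng.permutation(pop)                           # historical population
for _ in range(60):                                   # global BSA-style phase
    F = 3.0 * rng.standard_normal()                   # random amplitude, as in BSA
    trial = pop + F * (hist - pop)                    # mutate toward/away from history
    for i in range(len(pop)):                         # greedy one-to-one selection
        if obj(trial[i]) < obj(pop[i]):
            pop[i] = trial[i]
    if rng.random() < 0.5:                            # occasionally refresh the history
        hist = rng.permutation(pop)

best = pop[np.argmin([obj(p) for p in pop])]
for _ in range(100):                                  # local polish (stand-in for SQP)
    g = np.array([(obj(best + 1e-5 * e) - obj(best - 1e-5 * e)) / 2e-5
                  for e in np.eye(3)])
    best = best - 0.1 * g                             # gradient step
print(obj(best) < 1e-6)
```

The division of labour mirrors the abstract: the stochastic phase locates the basin, and the deterministic refinement drives the residual cost down rapidly.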
       
  • A new method for identification of fuzzy models with controllability
           constraints
    • Abstract: Publication date: Available online 28 August 2018. Source: Applied Soft Computing. Author(s): Leonel Gutiérrez, Diego Muñoz-Carpintero, Felipe Valencia, Doris Sáez. Takagi-Sugeno fuzzy models are classified as universal approximators and have proven to be a powerful tool for system prediction. However, in certain cases they may fail to inherit the main properties of a system, which can cause problems for control design. In particular, an unsuitable model can lead to a loss of closed-loop performance or stability, especially if the model is not controllable. Ensuring the controllability of a model therefore enables the computation of appropriate control laws that bring the system to the desired operating conditions. Accordingly, a new method for the identification of fuzzy models with controllability constraints is proposed in this paper. The method is based on including a penalty component in the objective function used for consequent parameter estimation, which allows one to impose controllability constraints on the linearized models at each point of the training data. The benefits of the proposed scheme are shown in a simulation-based study of a benchmark system and a continuous stirred tank: the stability and closed-loop performance of predictive controllers using models obtained with the proposed method are better than those using models found by classical and local fuzzy identification schemes.
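The penalty ingredient can be sketched directly: score a linearized model (A, B) by how close its controllability matrix [B, AB, A²B, …] is to rank deficiency, so the identification objective can add a term that grows as controllability is lost. The inverse-smallest-singular-value penalty form and the weight lam below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def controllability_penalty(A, B, lam=1e-3):
    """Soft penalty that blows up as the pair (A, B) approaches uncontrollability."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])                 # build [B, AB, A^2 B, ...]
    C = np.hstack(blocks)
    s_min = np.linalg.svd(C, compute_uv=False)[-1]    # 0 => rank-deficient => uncontrollable
    return lam / (s_min + 1e-12)

A_good = np.array([[0.9, 0.1], [0.0, 0.8]])
B_good = np.array([[0.0], [1.0]])
A_bad = np.eye(2)
B_bad = np.array([[1.0], [0.0]])                      # second state is unreachable
print(controllability_penalty(A_good, B_good) < controllability_penalty(A_bad, B_bad))
```

Adding such a term to the consequent-parameter least-squares objective discourages the identification from producing linearized models that a predictive controller could not steer.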
       
 
 
 