  Subjects -> COMPUTER SCIENCE (Total: 2122 journals)
    - ANIMATION AND SIMULATION (31 journals)
    - ARTIFICIAL INTELLIGENCE (105 journals)
    - AUTOMATION AND ROBOTICS (105 journals)
    - COMPUTER ARCHITECTURE (10 journals)
    - COMPUTER ENGINEERING (11 journals)
    - COMPUTER GAMES (21 journals)
    - COMPUTER PROGRAMMING (26 journals)
    - COMPUTER SCIENCE (1231 journals)
    - COMPUTER SECURITY (50 journals)
    - DATA BASE MANAGEMENT (14 journals)
    - DATA MINING (38 journals)
    - E-BUSINESS (22 journals)
    - E-LEARNING (30 journals)
    - IMAGE AND VIDEO PROCESSING (40 journals)
    - INFORMATION SYSTEMS (107 journals)
    - INTERNET (96 journals)
    - SOCIAL WEB (53 journals)
    - SOFTWARE (34 journals)
    - THEORY OF COMPUTING (9 journals)

COMPUTER SCIENCE (1231 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 24)
Abakós     Open Access   (Followers: 4)
ACM Computing Surveys     Hybrid Journal   (Followers: 31)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 8)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 17)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 15)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 5)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 9)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 12)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 16)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 7)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 6)
ACM Transactions on Economics and Computation     Hybrid Journal   (Followers: 2)
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 20)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 8)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 5)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 8)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 35)
Acta Automatica Sinica     Full-text available via subscription   (Followers: 2)
Acta Informatica Malaysia     Open Access  
Acta Universitatis Cibiniensis. Technical Series     Open Access  
Ad Hoc Networks     Hybrid Journal   (Followers: 11)
Adaptive Behavior     Hybrid Journal   (Followers: 10)
Advanced Engineering Materials     Hybrid Journal   (Followers: 29)
Advanced Science Letters     Full-text available via subscription   (Followers: 11)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 7)
Advances in Artificial Intelligence     Open Access   (Followers: 15)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 6)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 19)
Advances in Computer Engineering     Open Access   (Followers: 4)
Advances in Computer Science : an International Journal     Open Access   (Followers: 14)
Advances in Computing     Open Access   (Followers: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 59)
Advances in Engineering Software     Hybrid Journal   (Followers: 28)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 14)
Advances in Human Factors/Ergonomics     Full-text available via subscription   (Followers: 23)
Advances in Human-Computer Interaction     Open Access   (Followers: 21)
Advances in Materials Science     Open Access   (Followers: 15)
Advances in Operations Research     Open Access   (Followers: 12)
Advances in Parallel Computing     Full-text available via subscription   (Followers: 7)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Remote Sensing     Open Access   (Followers: 51)
Advances in Science and Research (ASR)     Open Access   (Followers: 6)
Advances in Technology Innovation     Open Access   (Followers: 6)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 4)
AI EDAM     Hybrid Journal   (Followers: 1)
Air, Soil & Water Research     Open Access   (Followers: 14)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 7)
Algebras and Representation Theory     Hybrid Journal   (Followers: 1)
Algorithms     Open Access   (Followers: 11)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 5)
American Journal of Computational Mathematics     Open Access   (Followers: 4)
American Journal of Information Systems     Open Access   (Followers: 6)
American Journal of Sensor Technology     Open Access   (Followers: 4)
Anais da Academia Brasileira de Ciências     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 7)
Analysis in Theory and Applications     Hybrid Journal   (Followers: 1)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 5)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 12)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 12)
Annals of Pure and Applied Logic     Open Access   (Followers: 3)
Annals of Software Engineering     Hybrid Journal   (Followers: 13)
Annals of West University of Timisoara - Mathematics and Computer Science     Open Access  
Annual Reviews in Control     Hybrid Journal   (Followers: 8)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 2)
Applied and Computational Harmonic Analysis     Full-text available via subscription   (Followers: 1)
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 12)
Applied Categorical Structures     Hybrid Journal   (Followers: 5)
Applied Clinical Informatics     Hybrid Journal   (Followers: 2)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 14)
Applied Computer Systems     Open Access   (Followers: 2)
Applied Informatics     Open Access  
Applied Mathematics and Computation     Hybrid Journal   (Followers: 33)
Applied Medical Informatics     Open Access   (Followers: 11)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Soft Computing     Hybrid Journal   (Followers: 17)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 7)
Applied System Innovation     Open Access  
Architectural Theory Review     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 6)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 152)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 6)
arq: Architectural Research Quarterly     Hybrid Journal   (Followers: 8)
Artifact     Open Access   (Followers: 2)
Artificial Life     Hybrid Journal   (Followers: 7)
Asia Pacific Journal on Computational Engineering     Open Access  
Asia-Pacific Journal of Information Technology and Multimedia     Open Access   (Followers: 1)
Asian Journal of Control     Hybrid Journal  
Assembly Automation     Hybrid Journal   (Followers: 2)
at - Automatisierungstechnik     Hybrid Journal   (Followers: 1)
Australian Educational Computing     Open Access   (Followers: 1)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 6)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 12)
Automation in Construction     Hybrid Journal   (Followers: 7)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 8)
Balkan Journal of Electrical and Computer Engineering     Open Access  
Basin Research     Hybrid Journal   (Followers: 5)
Behaviour & Information Technology     Hybrid Journal   (Followers: 51)
Big Data and Cognitive Computing     Open Access   (Followers: 3)
Biodiversity Information Science and Standards     Open Access  
Bioinformatics     Hybrid Journal   (Followers: 328)
Biomedical Engineering     Hybrid Journal   (Followers: 16)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 13)
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 19)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 35)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 51)
British Journal of Educational Technology     Hybrid Journal   (Followers: 160)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 12)
c't Magazin fuer Computertechnik     Full-text available via subscription   (Followers: 1)
CALCOLO     Hybrid Journal  
Calphad     Hybrid Journal   (Followers: 2)
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 15)
Capturing Intelligence     Full-text available via subscription  
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 2)
Cell Communication and Signaling     Open Access   (Followers: 2)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 5)
CERN IdeaSquare Journal of Experimental Innovation     Open Access   (Followers: 3)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 15)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 8)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
CIN Computers Informatics Nursing     Hybrid Journal   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 15)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Clinical eHealth     Open Access  
Cluster Computing     Hybrid Journal   (Followers: 2)
Cognitive Computation     Hybrid Journal   (Followers: 3)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 14)
Communication Methods and Measures     Hybrid Journal   (Followers: 13)
Communication Theory     Hybrid Journal   (Followers: 24)
Communications Engineer     Hybrid Journal   (Followers: 1)
Communications in Algebra     Hybrid Journal   (Followers: 3)
Communications in Computational Physics     Full-text available via subscription   (Followers: 2)
Communications in Information Science and Management Engineering     Open Access   (Followers: 4)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 4)
Communications of the ACM     Full-text available via subscription   (Followers: 51)
Communications of the Association for Information Systems     Open Access   (Followers: 16)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 3)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 6)
Complexus     Full-text available via subscription  
Composite Materials Series     Full-text available via subscription   (Followers: 8)
Computación y Sistemas     Open Access  
Computation     Open Access   (Followers: 1)
Computational and Applied Mathematics     Hybrid Journal   (Followers: 3)
Computational and Mathematical Biophysics     Open Access   (Followers: 1)
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 1)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 9)
Computational Astrophysics and Cosmology     Open Access   (Followers: 1)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 12)
Computational Chemistry     Open Access   (Followers: 2)
Computational Cognitive Science     Open Access   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Condensed Matter     Open Access   (Followers: 1)
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 9)
Computational Geosciences     Hybrid Journal   (Followers: 17)
Computational Linguistics     Open Access   (Followers: 24)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 8)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 35)
Computer     Full-text available via subscription   (Followers: 105)
Computer Aided Surgery     Open Access   (Followers: 6)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 8)
Computer Communications     Hybrid Journal   (Followers: 16)


Applied Soft Computing
Journal Prestige (SJR): 1.199
Citation Impact (CiteScore): 5
Number of Followers: 17  
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 1568-4946
Published by Elsevier
  • Network-wide signal timing stochastic simulation optimization with
           environmental concerns
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Liang Zheng, Chengcheng Xu, Peter J. Jin, Bin Ran. This study addresses a network-wide signal timing optimization problem with environmental concerns using a bi-objective stochastic simulation-based optimization (BOSSO) method. In this method, the global samples evaluated by costly simulation are used to build a surrogate model, the regressing Kriging model, which is then employed to predict the bi-objectives of untested samples and to filter noise from the evaluated samples. An adaptive selector determines which samples in the local trust region are evaluated by costly simulation and which by the regressing Kriging model. This helps to balance the computational cost and accuracy of the three quadratic regression models, especially when the variable dimension is high. A non-interactive decision-maker role is adopted to generate more non-dominated solutions around the desired bi-objective point. In the field experiments, an urban road network with 15 signalized and five non-signalized intersections in Changsha, China, is modeled in VISSIM as the simulation scenario. The traffic simulation model is first calibrated in two respects by the BOSSO method so that it reproduces real conditions well; the network-wide bi-objective signal timing optimization problem is then solved by the same method. Numerical results show that, compared with real-field traffic states, the total delay and vehicular emissions are reduced by at most 16.90% and 32.22%, respectively, within the budgeted number of simulations. Balance analyses also show a competing relationship between the two objectives. Finally, the BOSSO method is shown to outperform three counterparts (NSGA-II, BOTR and BOEGO) in various respects.
  • Hybrid meta-model based search method for expensive problems
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Jichao Gu, Wenqi Li, Chang Shu, Fanbo Zeng. The complexity and opacity of practical expensive problems hinder the further application of optimization algorithms based on a single meta-model. In this work, a hybrid meta-model based search method (HMBSM) is presented. In this method, an important region is first constructed using a subset of the expensively evaluated points. Then, three meta-models with different fitting techniques are used together, both in the important region and in the remaining region. The whole design space is also searched simultaneously to further avoid local optima. Through intensive testing on six benchmark mathematical functions with 10 to 24 variables, and comparison with efficient global optimization (EGO), the hybrid meta-model based design space management (HMDSM) method and the multiple meta-model based design space differentiation (MDSD) method, the proposed HMBSM method shows excellent accuracy, efficiency and robustness. The method is then applied to a vehicle lightweight design involving finite element analysis with 30 design variables, reducing the weight by 11.4 kg.
  • A distance-type-insensitive clustering approach
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Xiaowei Gu, Plamen Angelov, Zhijin Zhao. In this paper, we offer a method that minimizes the role of the distance metric used in clustering. It is well known that the distance metrics used in clustering algorithms heavily influence the end results and make the algorithms sensitive to imbalanced attribute/feature scales. To solve these problems, a new clustering algorithm with a per-attribute/feature ranking mechanism is proposed. Ranking is a discrete, nonlinear operator rarely used by clustering algorithms, yet it has unique advantages over the dominant continuous operators. The proposed algorithm is based on the ranks of the data samples in terms of their spatial separation and provides a more objective clustering result than the alternative approaches. Numerical examples on benchmark datasets demonstrate the validity and effectiveness of the proposed concept and principles.
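The rank-transform idea underlying this approach can be sketched in a few lines (an illustrative example, not the authors' algorithm): replacing each attribute value with its rank among the samples makes any subsequent distance-based clustering insensitive to attribute scale.

```python
def rank_transform(column):
    """Return the 0-based rank of each value within one attribute column."""
    order = sorted(range(len(column)), key=lambda i: column[i])
    ranks = [0] * len(column)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

col = [1.0, 3.0, 2.0]
assert rank_transform(col) == [0, 2, 1]
# Rescaling the attribute (e.g. metres to millimetres) leaves ranks unchanged,
# so distances computed on the transformed data are scale-insensitive:
assert rank_transform([v * 1e6 for v in col]) == [0, 2, 1]
```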
  • Inducing Hierarchical Multi-label Classification rules with Genetic Algorithms
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Ricardo Cerri, Márcio P. Basgalupp, Rodrigo C. Barros, André C.P.L.F. de Carvalho. Hierarchical Multi-Label Classification (HMC) is a challenging task in which the classes are hierarchically structured, with superclass and subclass relationships. It is very common, for instance, in Protein Function Prediction, where a protein can simultaneously perform multiple functions. In these tasks it is difficult to achieve high predictive performance, since hundreds or even thousands of classes with imbalanced data distributions have to be considered. In addition, the models should ideally be easily interpretable, to allow validation of the knowledge extracted from the data. This work proposes and investigates the use of Genetic Algorithms to induce rules that are both hierarchical and multi-label. Several experiments with different fitness functions and genetic operators are performed to obtain different HMC rules. The proposed Genetic Algorithm configurations are evaluated against state-of-the-art methods for HMC rule induction based on Ant Colony Optimization and Predictive Clustering Trees, using many datasets related to Protein Function Prediction. The experimental results show that it is possible to recommend the best configuration in terms of predictive performance and model interpretability.
  • A Quasi-Oppositional-Chaotic Symbiotic Organisms Search algorithm for
           global optimization problems
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Khoa H. Truong, Perumal Nallagownden, Zuhairi Baharudin, Dieu N. Vo. This study proposes an improved version of the Symbiotic Organisms Search (SOS) algorithm called Quasi-Oppositional Chaotic Symbiotic Organisms Search (QOCSOS). The improved algorithm integrates Quasi-Opposition-Based Learning (QOBL) and Chaotic Local Search (CLS) strategies into SOS for better solution quality and faster convergence. To validate the new algorithm's effectiveness, the authors tested QOCSOS on twenty-six mathematical benchmark functions of different types and dimensions. In addition, QOCSOS optimized placements of distributed generation (DG) units in radial distribution networks and solved five structural design optimization problems as practical optimization challenges. Comparative results showed that QOCSOS provided more accurate solutions than SOS and other methods, suggesting its viability for global optimization problems.
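Quasi-opposition-based learning, one of the two strategies named above, can be sketched as follows (a minimal illustration of the standard QOBL idea, not the authors' full QOCSOS implementation): the quasi-opposite of a candidate is drawn uniformly between the interval centre and the candidate's opposite point.

```python
import random

def quasi_opposite(x, a, b, rng=random):
    """Quasi-opposition-based learning (QOBL) for one variable in [a, b].

    The opposite of x is a + b - x; the quasi-opposite point is drawn
    uniformly between the interval centre (a + b) / 2 and that opposite.
    """
    centre = (a + b) / 2.0
    opposite = a + b - x
    lo, hi = min(centre, opposite), max(centre, opposite)
    return rng.uniform(lo, hi)

random.seed(0)
xq = quasi_opposite(1.0, 0.0, 10.0)
# x = 1 in [0, 10]: the opposite is 9, so the quasi-opposite lies in [5, 9]
assert 5.0 <= xq <= 9.0
```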
  • An intelligent water drops-based workflow scheduling for IaaS cloud
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Mainak Adhikari, Tarachand Amgoth. Cloud computing is an emerging technology in a distributed environment with a collection of large-scale heterogeneous systems. One of the challenging issues in a cloud data center is selecting the minimum number of virtual machine (VM) instances to execute the tasks of a workflow within a time limit. The objectives of such a strategy are to minimize the total execution time of the workflow and to improve resource utilization. However, while existing algorithms can achieve high execution efficiency, they do not guarantee high resource utilization, which depends on the reusability of VM instances. In this work, we propose a new intelligent water drops-based workflow scheduling algorithm for Infrastructure-as-a-Service (IaaS) clouds. The objectives of the proposed algorithm are to achieve higher resource utilization and to minimize the makespan within given deadline and budget constraints. The first contribution is a method for finding multiple partial critical paths (PCPs) of a workflow, which helps in finding suitable VM instances; the second is a scheduling strategy for PCP-VM assignment. The proposed algorithm is evaluated through simulation runs on synthetic datasets using various performance metrics, and comparisons show its superior performance over existing algorithms.
  • Tabu Search in revealing the internal state of RC4+ cipher
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Iwona Polak, Mariusz Boryczka. Privacy and confidentiality can be secured with cryptography, so one needs to be sure that the ciphers employed are secure and resistant to cryptanalysis. This paper describes a new state recovery attack on the RC4+ stream cipher. It is a known-plaintext attack: having both the plaintext and the ciphertext, the attacker can calculate the keystream, which leads to revealing the internal state of the cipher. The presented approach is based on Tabu Search, with different neighbourhood types and two types of aspiration criteria proposed and considered. It requires checking 2^50 possible internal states on average in order to find the correct state. This is the best-known attack on RC4+; the other attacks known to date are a distinguishing attack and a differential fault analysis, but only the proposed approach can reveal the internal state of RC4+ after checking so few permutations on average.
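The general shape of a tabu search with an aspiration criterion, the metaheuristic this attack is built on, looks roughly like this (a generic skeleton on a toy problem, not the cipher attack itself):

```python
def tabu_search(initial, neighbours, cost, iters=200, tabu_len=20):
    """Generic tabu search skeleton.

    Moves to the best non-tabu neighbour each iteration, keeping a short
    memory (tabu list) of recently visited states so the search does not
    cycle; aspiration lets a tabu move through if it beats the best
    solution seen so far.
    """
    current = best = initial
    tabu = [initial]
    for _ in range(iters):
        candidates = neighbours(current)
        # aspiration: a tabu candidate is allowed if it improves on best
        allowed = [s for s in candidates
                   if s not in tabu or cost(s) < cost(best)]
        if not allowed:
            break
        current = min(allowed, key=cost)
        if cost(current) < cost(best):
            best = current
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
    return best

# Toy use: minimise (x - 7)^2 over the integers, stepping by +-1
best = tabu_search(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2)
assert best == 7
```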
  • Hesitant 2-tuple linguistic Bonferroni operators and their utilization in
           group decision making
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Lidong Wang, Yanjun Wang, Witold Pedrycz. The hesitant 2-tuple linguistic variable provides a graded-information approach to characterizing the uncertainty of human cognition. This study develops new aggregation operators and a new group decision making approach that address information fusion involving the interrelationship between aggregated terms and the prioritization relationship among decision makers in hesitant 2-tuple linguistic settings. First, the hesitant 2-tuple linguistic Bonferroni mean (H2TLBM) operator and the prioritized weighted hesitant 2-tuple linguistic Bonferroni mean (PWH2TLBM) operator are established, and their pertinent properties and special forms are studied in detail. To apply the proposed operators to group decision making problems, a novel TODIM (an acronym in Portuguese of interactive and multiple attribute decision making) method based on possibility degree is developed for hesitant 2-tuple linguistic information. The designed method not only inherits the merits of the traditional TODIM approach, but also characterizes the interrelationship of criteria. The solution process is exemplified in detail to highlight the practicality and feasibility of the designed method, and a comparative analysis with other methods offers further insight.
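The classical (crisp) Bonferroni mean that the H2TLBM operator generalizes has a direct closed form; the sketch below shows the plain numeric version, not the hesitant 2-tuple linguistic variant of the paper.

```python
def bonferroni_mean(values, p=1, q=1):
    """Classical Bonferroni mean BM^{p,q}.

    BM = ( 1/(n(n-1)) * sum_{i != j} a_i^p * a_j^q )^(1/(p+q)).
    The pairwise products capture the interrelationship between every
    pair of aggregated arguments, which is what the paper's hesitant
    2-tuple linguistic operators carry over to linguistic information.
    """
    n = len(values)
    total = sum(a ** p * b ** q
                for i, a in enumerate(values)
                for j, b in enumerate(values) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

# For identical inputs the mean reproduces the input value
assert abs(bonferroni_mean([0.5, 0.5, 0.5]) - 0.5) < 1e-12
```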
  • Community robustness and its enhancement in interdependent networks
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Shuai Wang, Jing Liu. Complex networks attract increasing attention in scientific research and daily applications, and there has been an explosion of studies on network theory. Focusing on the functional information behind network structures, community robustness has been proposed and dissected for single networks in recent studies, but existing studies on the robustness of interdependent networks are limited to structural connections. As indicated by several practical dilemmas, the invulnerability of communities and the maintenance of functional clusters in interdependent networked systems urgently need analysis. Therefore, in this paper, we first design a measure for numerically evaluating the community robustness of interdependent networks; guided by this measure, a memetic optimization algorithm, termed MA-CRinter, is proposed to enhance the community robustness of various synthetic and real-world networks by rewiring topologies. Furthermore, two non-rewiring optimization strategies are modified and tested for comparison. The results show that these methods are also valuable for enhancing community robustness, and that different optimization strategies can be selected based on actual restraints and requirements. Aiming at improving the attack tolerance of communities in interdependent networks, the obtained results facilitate information mining from networked systems and may provide solutions to realistic optimization problems.
  • Analysis of high-dimensional genomic data employing a novel bio-inspired
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Santos Kumar Baliarsingh, Swati Vipsita, Khan Muhammad, Bodhisattva Dash, Sambit Bakshi. Over the last decade, there has been rapid growth in the generation and analysis of genomic data. Although existing data analysis methods can handle particular problems, no single method is guaranteed to solve problems of every nature, so there is always scope for a new algorithm to solve problems the existing ones cannot solve efficiently. In the present work, a novel hybrid approach is proposed, based on an improved version of a recently developed bio-inspired optimization technique, the salp swarm algorithm (SSA), for microarray classification. Initially, the Fisher score filter is employed to pre-select a subset of relevant genes from the original high-dimensional microarray dataset. A weighted-chaotic SSA (WCSSA) is then proposed for simultaneous optimal gene selection and parameter optimization of the kernel extreme learning machine (KELM) classifier. The proposed scheme is tested on both binary-class and multi-class microarray datasets, with an extensive comparison against the original SSA-KELM, particle swarm optimized KELM (PSO-KELM) and genetic algorithm KELM (GA-KELM). Lastly, the proposed method is compared against sixteen existing techniques, reducing the number of original genes by more than 98%. The experimental results show that the genes selected by the proposed method yield higher classification accuracy than the alternative techniques; its effectiveness is demonstrated in terms of the number of selected genes (NSG), accuracy, sensitivity, specificity, Matthews correlation coefficient (MCC) and F-measure. The proposed WCSSA-KELM method is validated using ten-fold cross-validation.
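The Fisher score filter mentioned above has a simple closed form; the sketch below is a generic single-feature implementation (an illustration, not the authors' code).

```python
from statistics import mean, pvariance

def fisher_score(values, labels):
    """Fisher score of one feature (gene) on a labelled dataset.

    F = sum_c n_c (mu_c - mu)^2 / sum_c n_c var_c: features whose class
    means are well separated relative to their within-class spread score
    highly and survive the pre-selection filter.
    """
    mu = mean(values)
    between = within = 0.0
    for c in set(labels):
        vc = [v for v, l in zip(values, labels) if l == c]
        between += len(vc) * (mean(vc) - mu) ** 2
        within += len(vc) * pvariance(vc)
    return between / within

# A feature that separates the two classes scores higher than one that doesn't
good = fisher_score([1, 1.1, 0.9, 5, 5.1, 4.9], [0, 0, 0, 1, 1, 1])
flat = fisher_score([1, 5, 3, 1, 5, 3], [0, 0, 0, 1, 1, 1])
assert good > flat
```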
  • A group medical diagnosis model based on intuitionistic fuzzy soft sets
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Junhua Hu, Li Pan, Yan Yang, Haiwei Chen. This paper develops a group decision making (GDM) model for medical diagnosis by determining the weights of experts through a new similarity measure for intuitionistic fuzzy soft sets (IFSSs), and by integrating the evaluation information with the weighted intuitionistic fuzzy soft Bonferroni mean (WIFSBM) operator. We first investigate an existing similarity measure and find that it may lead to counter-intuitive results; we then define an improved similarity measure for IFSSs and investigate its properties. A new model based on this similarity measure estimates the weights of experts when those weights are completely unknown, while the weights of parameters are computed from the standard evaluation matrix by the maximizing deviation method. Considering the interactions among parameters, the integrated information for each alternative is obtained with the WIFSBM operator, and the ranking index for each alternative is calculated with the score function. A case study and a comparative analysis illustrate the practicality and effectiveness of the proposed group medical diagnosis model in an intuitionistic fuzzy soft environment.
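For intuition, a simple distance-based similarity measure between two intuitionistic fuzzy sets can be written as follows; this is a textbook-style measure over (membership, non-membership) pairs, not the improved measure the paper proposes.

```python
def ifs_similarity(A, B):
    """Distance-based similarity between two intuitionistic fuzzy sets.

    Each set is a list of (membership, non-membership) pairs; the
    similarity is 1 minus the normalized absolute difference of those
    degrees, so identical sets score 1 and the result stays in [0, 1].
    """
    n = len(A)
    d = sum(abs(ma - mb) + abs(na - nb)
            for (ma, na), (mb, nb) in zip(A, B)) / (2.0 * n)
    return 1.0 - d

A = [(0.6, 0.3), (0.5, 0.4)]
assert ifs_similarity(A, A) == 1.0
assert 0.0 <= ifs_similarity(A, [(0.2, 0.7), (0.9, 0.1)]) <= 1.0
```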
  • Deep infrared pedestrian classification based on automatic image matting
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Yihui Liang, Han Huang, Zhaoquan Cai, Zhifeng Hao, Kay Chen Tan. Infrared pedestrian classification plays an important role in advanced driver assistance systems. However, it encounters great difficulties when the pedestrian images are superimposed on a cluttered background. Many researchers design very deep neural networks to classify pedestrians against cluttered backgrounds, but a very deep neural network is associated with a high computational cost. Suppressing the cluttered background can boost the performance of deep neural networks without increasing their depth, yet this has received little attention in the past. This study presents an automatic image matting approach for infrared pedestrians that suppresses the cluttered background and provides consistent input to deep learning. Domain expertise in pedestrian classification is applied to automatically and softly extract foreground objects from images with cluttered backgrounds. The trimaps, which must be generated manually in conventional approaches, are generated according to the estimated positions of the pedestrian's head and upper body, without the need for any user interaction. Image matting is implemented by adopting the global matting approach and taking the generated trimap as input. The pedestrian representation is then learned by a deep learning approach from the resulting alpha mattes, in which the cluttered background is suppressed and the foreground is enhanced. The experimental results show that the proposed approach improves the infrared pedestrian classification performance of state-of-the-art deep learning approaches at a negligible computational cost.
  • Novel adaptive genetic algorithm sample consensus
    • Abstract: Publication date: Available online 8 February 2019. Source: Applied Soft Computing. Author(s): Ehsan Shojaedini, Mahshid Majd, Reza Safabakhsh. Random Sample Consensus (RANSAC) is a successful algorithm for model fitting applications when there are numerous outliers within the dataset. Achieving a proper model is guaranteed through the pure exploration strategy of RANSAC. However, finding the optimum result requires exploitation. Genetic Algorithm Sample Consensus (GASAC) is an evolutionary paradigm that adds exploitation capability to RANSAC. Although GASAC improves the results of RANSAC, it has a fixed strategy for balancing exploration and exploitation. In this paper, a new paradigm based on genetic algorithms with an adaptive strategy is proposed. We propose an adaptive genetic operator that selects the proper number of high-fitness individuals as parents and mutates the rest. This operator can adjust the ratio of the exploration and exploitation phases according to the proportion of outliers. Also, a learning method is proposed for the mutation operator to gradually learn which gene is the best replacement for the mutated gene. This operator guides the exploration phase towards good solution areas and therefore produces better individuals for further exploitation. The proposed method is extensively evaluated in two sets of experiments. In all tests, our method outperformed the other methods in terms of both the number of inliers found and the speed of the algorithm.
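For context, the pure-exploration RANSAC loop that GASAC and the proposed adaptive variant build on can be sketched as follows (a toy 2-D line-fitting setup of our own, not the paper's benchmark):

```python
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.1, rng=None):
    """Basic RANSAC for 2-D line fitting: repeatedly sample minimal sets at
    random (pure exploration) and keep the model with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)          # slope of the candidate line
        b = y1 - a * x1                    # intercept
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 80)
pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.02, 80)])
outliers = rng.uniform(0, 10, (40, 2))     # heavy outlier contamination
pts = np.vstack([pts, outliers])
(a, b), n_in = ransac_line(pts)
```

GASAC replaces the independent random sampling with a population that is recombined and mutated, which is where the paper's adaptive exploration/exploitation operator comes in.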
  • Deep belief network-based AR model for nonlinear time series forecasting
    • Abstract: Publication date: Available online 7 February 2019. Source: Applied Soft Computing. Author(s): Wenquan Xu, Hui Peng, Xiaoyong Zeng, Feng Zhou, Xiaoying Tian, Xiaoyan Peng. For a class of nonlinear time series whose dynamic behavior changes smoothly with the system state, a state-dependent auto-regressive (SD-AR) model is proposed to characterize the nonlinear time series. A set of deep belief networks (DBNs) is used to build the state-dependent functional coefficients of the SD-AR model; the resulting model, called the DBN-AR model, combines the advantage of DBNs in function approximation with the merit of the SD-AR model in describing nonlinear dynamics. The DBN-AR model is driven by the state signal changing with time. Based on the minimum-norm least squares solution and the pseudo-inverse matrix approach, the initial target values of the DBNs are determined in the pre-training stage. In the fine-tuning stage, all parameters of the DBN-AR model are tuned by a back-propagation (BP) algorithm designed for fine-tuning the DBN-AR model. Through experiments and a comparative study on the sunspot data, the electricity load demand data sets from the Australian Energy Market Operator (AEMO), the weekly British Pound/US dollar (GBP/USD) exchange rate data, and the daily electricity generation data of the Three Gorges dam right bank power station, it is shown that the DBN-AR model is superior to several existing models and methods in prediction accuracy.
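The core idea of an SD-AR model is that the AR coefficients are functions of the current state rather than constants; the paper realizes those functions with DBNs, which we replace here with toy closed forms purely to illustrate the model structure:

```python
import numpy as np

def sdar_predict(y_hist, state, coef_fns):
    """One-step SD-AR forecast: each AR coefficient is evaluated at the
    current state (in the DBN-AR model these functions are deep belief
    networks; here they are hypothetical closed forms)."""
    phis = np.array([f(state) for f in coef_fns])
    # newest observation pairs with the lag-1 coefficient, and so on
    return phis @ y_hist[::-1][: len(phis)]

# Hypothetical state-dependent coefficient functions, for illustration only
coef_fns = [
    lambda s: 0.9 / (1 + np.exp(-s)),   # lag-1 weight grows with the state
    lambda s: 0.1 * np.tanh(s),         # lag-2 weight
]
history = np.array([0.2, 0.5, 0.8])     # oldest ... newest
pred = sdar_predict(history, state=history[-1], coef_fns=coef_fns)
```

Here the driving state is simply the latest observation; the paper lets any measurable state signal drive the coefficients.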
  • Symbiotic organisms search algorithm for optimal evolutionary controller
           tuning of fractional fuzzy controllers
    • Abstract: Publication date: Available online 6 February 2019. Source: Applied Soft Computing. Author(s): Yongquan Zhou, Fahui Miao, Qifang Luo. With the development of technology and the practical needs of complex engineering applications, fuzzy controllers have been widely applied. In contrast to a traditional integer-order fuzzy controller, a fractional fuzzy controller can extend the integral and differential order of a fuzzy controller to any real number, which describes the controlled object more accurately and enhances its control performance. However, a fractional fuzzy controller has a larger number of control parameters, which makes it difficult to calibrate. Because the tuned parameter values of the fuzzy controller clearly influence its control performance, this paper proposes to optimize the controller parameter tuning process using the symbiotic organisms search algorithm. A large number of simulation tests were carried out to compare the symbiotic organisms search-based tuning method with tuning based on five other representative swarm intelligence algorithms. The experimental results show that the symbiotic organisms search algorithm better optimizes the parameters of the fractional fuzzy controller.
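For readers unfamiliar with the optimizer, the mutualism phase at the heart of symbiotic organisms search can be sketched as follows (a generic SOS step on the sphere function, not the paper's controller-tuning setup):

```python
import numpy as np

def mutualism_step(pop, i, j, best, rng):
    """Mutualism phase of symbiotic organisms search: organisms i and j both
    benefit by moving toward the best solution via their mutual vector."""
    mutual = (pop[i] + pop[j]) / 2.0
    bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)   # benefit factors in {1, 2}
    new_i = pop[i] + rng.random(pop.shape[1]) * (best - mutual * bf1)
    new_j = pop[j] + rng.random(pop.shape[1]) * (best - mutual * bf2)
    return new_i, new_j

# Minimize the sphere function with one illustrative update
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(6, 2))
fit = (pop ** 2).sum(axis=1)
best = pop[fit.argmin()]
cand_i, cand_j = mutualism_step(pop, 0, 3, best, rng)
# Greedy selection: keep a candidate only if it improves the organism
if (cand_i ** 2).sum() < fit[0]:
    pop[0] = cand_i
```

A full SOS run also includes commensalism and parasitism phases; in the paper, each organism encodes a vector of fractional fuzzy controller parameters.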
  • A probabilistic finite state machine based strategy for multi-target
           search using swarm robotics
    • Abstract: Publication date: Available online 6 February 2019. Source: Applied Soft Computing. Author(s): Jie Li, Ying Tan. As a distributed system, swarm robotics is well suited to the multi-target search task, where a single robot is rather inefficient. In this paper, a model of the multi-target search problem in swarm robotics and its approximate mathematical representation are given, from which a lower bound on the expected number of iterations is derived. Two categories of behavior-based strategies for target search are introduced: one inspired by swarm intelligence optimization and the other by random walks. A novel search strategy based on a probabilistic finite state machine is put forward, showing the highest efficiency of all presented algorithms, which is very close to the optimal value in situations with a large number of robots. Extensive experiments demonstrate that the novel strategy has excellent stability, striking a good balance between exploration and exploitation, as well as a good trade-off between parallelism and cooperative capability.
  • Many objective cooperative bat searching algorithm
    • Abstract: Publication date: Available online 5 February 2019. Source: Applied Soft Computing. Author(s): Haopeng Zhang, Qing Hui. In this paper, a many-objective cooperative bat searching algorithm (MOCBA) is proposed to solve many-objective optimization problems by using the balanceable fitness estimation method. Similar to the particle swarm optimization (PSO) algorithm and the evolutionary algorithm (EA), the cooperative bat searching algorithm (CBA) is a recently developed swarm intelligence optimization algorithm that efficiently solves single-objective optimization problems. With the balanceable fitness estimation method, the MOCBA balances the diversity ability and convergence ability of the algorithm during the search process. Moreover, the convergence of the MOCBA is also studied, and results on its convergence in mean and convergence in probability are presented. Experimental results demonstrate the effectiveness of the proposed MOCBA by comparing it with fourteen state-of-the-art many-objective optimization algorithms on the benchmark functions DTLZ1–DTLZ5 and WFG1–WFG9. Based on the means and standard deviations of 100 algorithm executions, together with Wilcoxon rank sum tests and Friedman tests, the proposed MOCBA shows superior performance among all fifteen algorithms.
  • An approach of TOPSIS technique for developing supplier selection with
           group decision making under type-2 neutrosophic number
    • Abstract: Publication date: Available online 5 February 2019. Source: Applied Soft Computing. Author(s): Mohamed Abdel-Basset, M. Saleh, Abduallah Gamal, Florentin Smarandache. This paper proposes an advanced type of neutrosophic technique, called type 2 neutrosophic numbers, and defines some of its operational rules. The type 2 neutrosophic number weighted averaging operator is defined in order to aggregate a set of type 2 neutrosophic numbers, and some properties of the suggested operator are derived. The operator is employed in a MADM problem to aggregate the type-2-neutrosophic-number-based rating values of each alternative over the features. The aggregated rating values of every alternative are ranked with the assistance of score and accuracy values in order to detect the superior alternative. We introduce an illuminating example to confirm the suggested approach for multi-attribute decision making problems, ordering the alternatives based on the accuracy function. Selecting an appropriate alternative among the options is a difficult activity for decision makers, since it is complicated to express attributes as crisp numbers. To tackle this problem, type 2 neutrosophic numbers can be efficiently used to estimate information in the decision making process, as they can accurately describe real cognitive information. We propose a novel T2NN-TOPSIS strategy combining type 2 neutrosophic numbers and TOPSIS under group decision making as an application of T2NN, suggesting a type 2 neutrosophic number expression for linguistic terms. Finally, we provide a real case study dealing with a decision making problem based on the proposed T2NN-TOPSIS methodology to prove the efficiency and applicability of the type 2 neutrosophic number.
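The crisp TOPSIS backbone that the T2NN extension generalizes ranks alternatives by relative closeness to ideal and anti-ideal points; a plain numeric sketch (the supplier data below are invented):

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Crisp TOPSIS ranking: vector-normalize, weight, then score each
    alternative by relative closeness to the ideal and anti-ideal points."""
    norm = decision / np.linalg.norm(decision, axis=0)
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)    # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_minus / (d_plus + d_minus)

# Three suppliers rated on cost (lower is better) and quality (higher is better)
scores = topsis(
    decision=np.array([[250.0, 8.0], [200.0, 7.0], [300.0, 9.5]]),
    weights=np.array([0.4, 0.6]),
    benefit=np.array([False, True]),
)
ranking = np.argsort(scores)[::-1]     # best supplier first
```

In the T2NN-TOPSIS variant, the crisp entries above are replaced by type 2 neutrosophic numbers aggregated across the group of decision makers.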
  • Design and implementation of the fuzzy expert system in Monte Carlo
           methods for fuzzy linear regression
    • Abstract: Publication date: Available online 4 February 2019. Source: Applied Soft Computing. Author(s): Duygu İçen, Süleyman Günay. In this study, a fuzzy expert system (FES) is applied, for the first time in the literature, within the Monte Carlo (MC) method used for estimating fuzzy linear regression model (FLRM) parameters, in order to determine the parameter intervals. The MC method for estimating FLRM parameters is a new field of study that is very useful and time saving. However, a major problem can occur in determining the intervals from which the regression model parameters are supposed to come: if the intervals are too large, the FLRM error will be very large; conversely, the actual model parameters will not be obtained if the intervals are too narrow. This drawback has not been addressed in the literature before, and only optimization methods have been applied to achieve the best interval values. In this article, the FES is used for the first time to solve this problem in the parameter estimation process for the FLRM in the field of statistics. For this purpose, the difference between the fuzzy observation value and the support set of the fuzzy estimation value (W) is taken into account. The most appropriate intervals for the parameters are those that make W as small as possible. Thus, the FES is designed to determine the best intervals for the model parameters; its knowledge base is composed of 7 fuzzy rules. As a result, the FLRM parameter estimates obtained from the MC method using the FES are very close to the real values. The real impact of this paper lies in showing the applicability of FESs, by means of linguistic expressions, to problems encountered in the field of statistics. These outcomes will be useful for enriching studies that focus on FLRMs and will encourage researchers to use FESs to solve problems in statistics. To sum up, this study demonstrates that FESs, which are already used in technological devices that make our lives easier, can also be used to efficiently solve problems confronted in the field of statistics, using linguistic expressions in the manner of human inference.
  • Social relationships classification using social contextual features and
           SVDD-based metric learning
    • Abstract: Publication date: Available online 4 February 2019. Source: Applied Soft Computing. Author(s): Xiaoqian Qin, Dakun Liu, Dong Wang. Family relationship is an important concern in image-based social relationship recognition, and there are very limited attempts to tackle diverse social relationships in the literature. In this paper, we propose the problem of social relationship classification, in which we aim to model three types of social relationships (e.g., family, colleagues, and friends) in images. To this end, we introduce two types of social contextual features to capture detailed information (e.g., geometry or appearance) in images. Moreover, we present a new Support Vector Data Description-based metric learning (SML) method for social relationship classification. Motivated by the fact that images are unavoidably degraded by noise due to variation factors such as illumination and pose, we aim to learn a robust distance metric that suppresses noise and models the spatial structure among multiple entities, such that more discriminative information can be exploited for classification. We also extend our method to a multi-view version, MSML, which exploits multiple features to improve social relationship classification performance. Extensive experiments on our newly released social relationships database demonstrate the feasibility and effectiveness of the proposed methods.
  • The two-echelon multi-trip vehicle routing problem with dynamic satellites
           for crop harvesting and transportation
    • Abstract: Publication date: Available online 1 February 2019. Source: Applied Soft Computing. Author(s): Pengfei He, Jing Li. The harvesting and transportation system involves a harvest schedule and a transportation plan. The grain, harvested by combine-harvesters, is transported by transporters from dispersed farmlands to the depot. The spots where combine-harvesters transfer wheat to transporters are dynamic, because their locations correspond with the combine-harvesters' progress. In this paper, the harvesting and transportation problem is modeled as a two-echelon multi-trip vehicle routing problem with dynamic satellites (2E-MTVRPDS), because each combine-harvester is used multiple times in the planning horizon and each transporter is used multiple times in a work day. A mixed integer linear programming model is proposed based on the features of the problem, and a heuristic algorithm is presented to solve it; in the heuristic, the dynamic satellite is transformed into a static case. Computational experiments are conducted to test the performance of the proposed algorithm, and five instances of different sizes are adopted to test its stability. The calculation deviation on the test instances is acceptable. On one hand, optimal effectiveness can be achieved when the number of instances is less than 200; as the number of instances increases, the optimization efficiency declines. On the other hand, the optimal solution appears to have a time window of 0.2 hr in all instances of different sizes. This study provides a decision model for agricultural production to implement optimal harvesting operations.
  • A new soft computing model for estimating and controlling blast-produced
           ground vibration based on Hierarchical K-means clustering and Cubist
    • Abstract: Publication date: Available online 1 February 2019. Source: Applied Soft Computing. Author(s): Hoang Nguyen, Xuan-Nam Bui, Quang-Hieu Tran, Ngoc-Luan Mai. Blasting is an essential task in open-pit mines for rock fragmentation. However, its dangerous side effects need to be accurately estimated and controlled, especially ground vibration as measured in the form of peak particle velocity (PPV). The accuracy of blast-induced PPV estimation can be improved by a hybrid artificial intelligence approach. In this study, a new hybrid model was developed based on Hierarchical K-means clustering (HKM) and the Cubist algorithm (CA), code-named the HKM-CA model. The HKM clustering technique was used to separate the data according to their characteristics; subsequently, Cubist models were trained and developed on the clusters generated by HKM. An empirical technique, benchmark algorithms [random forest (RF), support vector machine (SVM), and classification and regression tree (CART)], and a single CA model were also established for benchmarking the HKM-CA model. Root-mean-square error (RMSE), determination coefficient (R2), and mean absolute error (MAE) were the key indicators used for evaluating model performance. The results revealed that the proposed HKM-CA model was a powerful tool for improving the accuracy of the CA model. Specifically, the HKM-CA model yielded a superior result, with an RMSE of 0.475, R2 of 0.995, and MAE of 0.373, in comparison to the other models. The proposed HKM-CA model has the potential to be used for predicting blast-induced PPV on-site to control undesirable effects on the surrounding environment.
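The cluster-then-regress shape of the HKM-CA pipeline (cluster the data, then train one regressor per cluster) can be sketched as below; we stand in plain k-means for the hierarchical variant and ordinary least squares for Cubist, so this only illustrates the pipeline structure, not the paper's models:

```python
import numpy as np

def kmeans(X, k, iters=50, rng=None):
    """Plain Lloyd k-means (stand-in for the hierarchical K-means step)."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

def fit_per_cluster(X, y, labels, k):
    """One regressor per cluster (Cubist in the paper; least squares here)."""
    models = {}
    for c in range(k):
        mask = labels == c
        if mask.sum() < 3:            # too few points to fit a local model
            continue
        A = np.column_stack([X[mask], np.ones(mask.sum())])
        models[c] = np.linalg.lstsq(A, y[mask], rcond=None)[0]
    return models

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
y = X.sum(axis=1)                     # exactly linear target for illustration
centers, labels = kmeans(X, k=2)
models = fit_per_cluster(X, y, labels, 2)
```

At prediction time a new blast record would be assigned to its nearest cluster and scored by that cluster's model.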
  • Gaussian kernels for incomplete data
    • Abstract: Publication date: Available online 31 January 2019. Source: Applied Soft Computing. Author(s): Diego P.P. Mesquita, João P.P. Gomes, Francesco Corona, Amauri H. Souza, Juvêncio S. Nobre. This paper discusses a method to estimate the expected value of the Gaussian kernel in the presence of incomplete data. We show how, under the general assumption of a missing-at-random mechanism, the expected value of the Gaussian kernel function has a simple closed-form solution. Such a solution depends only on the parameters of the Gamma distribution which is assumed to represent squared distances. Furthermore, we show how the parameters governing the Gamma distribution depend only on the non-central moments of the kernel arguments, via the second-order moments of their squared distance, and can be estimated by making use of any parametric density estimation model of the data distribution. We approximate the data distribution with the maximum likelihood estimate of a Gaussian mixture distribution. The validity of the method is empirically assessed, under a range of conditions, on synthetic and real problems and the results compared to existing methods. For comparison, we consider methods that indirectly estimate a Gaussian kernel function by either estimating squared distances or by imputing missing values and then computing distances. Based on the experimental results, the proposed method consistently proves itself an accurate technique that further extends the use of Gaussian kernels with incomplete data.
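The closed form behind this approach is the moment-generating function of the Gamma distribution evaluated at a negative argument: if the squared distance D² follows Gamma(k, θ), then E[exp(−γD²)] = (1 + γθ)^(−k). A quick sketch with a Monte Carlo sanity check (parameter values invented):

```python
import numpy as np

def expected_gaussian_kernel(shape, scale, gamma):
    """Closed-form E[exp(-gamma * D^2)] when the squared distance D^2 follows
    a Gamma(shape, scale) distribution: the Gamma MGF evaluated at -gamma."""
    return (1.0 + scale * gamma) ** (-shape)

# Monte Carlo sanity check of the closed form
rng = np.random.default_rng(0)
shape, scale, gamma = 2.0, 1.5, 0.3
d2 = rng.gamma(shape, scale, size=200_000)   # simulated squared distances
mc_estimate = np.exp(-gamma * d2).mean()
closed_form = expected_gaussian_kernel(shape, scale, gamma)
```

The harder part of the paper is estimating the Gamma shape and scale from the moments of the (partially observed) kernel arguments, which the snippet above takes as given.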
  • Mobile robot path planning using membrane evolutionary artificial
           potential field
    • Abstract: Publication date: Available online 31 January 2019. Source: Applied Soft Computing. Author(s): Ulises Orozco-Rosas, Oscar Montiel, Roberto Sepúlveda. In this paper, a membrane evolutionary artificial potential field (memEAPF) approach for solving the mobile robot path planning problem is proposed. It combines membrane computing with a genetic algorithm (a membrane-inspired evolutionary algorithm with a one-level membrane structure) and the artificial potential field method to find the parameters that generate a feasible and safe path. The memEAPF proposal consists of delimited compartments in which multisets of parameters evolve according to rules of biochemical inspiration to minimize the path length. The proposed approach is compared with artificial potential field based path planning methods with respect to their planning performance on a set of twelve benchmark test environments, and it exhibits better performance regarding path length. Experiments demonstrating the statistical significance of the improvements achieved by the proposed approach in static and dynamic environments are shown. Moreover, implementation results using parallel architectures prove the effectiveness and practicality of the proposal in obtaining solutions in considerably less time.
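The artificial potential field underneath memEAPF combines an attractive pull toward the goal with repulsive pushes from nearby obstacles; a minimal gradient-following sketch with invented gains (these are exactly the kinds of parameters memEAPF evolves, but the values here are ours):

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Total artificial-potential-field force: linear attraction toward the
    goal plus repulsion from obstacles inside the influence radius d0."""
    force = k_att * (goal - pos)                   # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                             # repulsion only when close
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
    return force

# Follow the force field from start to goal past one off-path obstacle
pos = np.array([0.0, 0.0])
goal = np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.0])]
for _ in range(2000):
    pos = pos + 0.01 * apf_force(pos, goal, obstacles)   # small descent step
```

With poorly chosen gains this field is prone to local minima, which is the motivation for evolving the parameters rather than hand-tuning them.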
  • A genetic programming-based regression for extrapolating a blood
           glucose-dynamics model from interstitial glucose measurements and their
           first derivatives
    • Abstract: Publication date: Available online 31 January 2019. Source: Applied Soft Computing. Author(s): I. De Falco, A. Della Cioppa, A. Giugliano, A. Marcelli, T. Koutny, M. Krcma, U. Scafuri, E. Tarantino. This paper illustrates the development and applicability of an Evolutionary Computation approach to enhance the treatment of Type-1 diabetic patients who require insulin injections. Since the disease is associated with a malfunctioning pancreas that generates an insufficient amount of insulin, one way to enhance the quality of life of these patients is to implement an artificial pancreas able to regulate the insulin dosage automatically. This work aims at extrapolating a regression model capable of estimating blood glucose (BG) from interstitial glucose (IG) measurements and their numerical first derivatives. Such an approach represents a viable preliminary stage in building the basic component of this artificial pancreas. In particular, considering the high complexity of the reciprocal interactions, an evolutionary-based strategy is outlined to extrapolate a mathematical relationship between BG and IG and its derivative. The investigation examines the accuracy of personalized models and of a global relationship model for all subjects under examination. The discovered models are assessed through a comparison with other models in experiments on personalized and global data.
  • Secure Routing Protocol based on Multi-objective Ant-colony-optimization
           for wireless sensor networks
    • Abstract: Publication date: Available online 31 January 2019. Source: Applied Soft Computing. Author(s): Ziwen Sun, Min Wei, Zhiwei Zhang, Gang Qu. To achieve the maximum network security goal with lower energy consumption in wireless sensor networks, this paper proposes a Secure Routing Protocol based on Multi-objective Ant-colony-optimization (SRPMA) for wireless sensor networks. The ant colony algorithm is improved into a multi-objective routing algorithm that considers the residual energy of nodes and the trust value of a route as two optimization objectives; a route is produced using the multi-pheromone and multi-heuristic information associated with the two objective functions. A node trust evaluation model is established using an improved D-S evidence theory with conflict preprocessing to evaluate the trust degree of nodes. The multi-objective routing result is obtained via the Pareto optimal solution mechanism, using an external archive with a crowding-distance criterion. Simulation results obtained with NS2 show that the proposed algorithm can achieve the desired performance against the black hole attack in WSN routing.
  • Bi-sparse optimization-based least squares regression
    • Abstract: Publication date: Available online 30 January 2019. Source: Applied Soft Computing. Author(s): Zhiwang Zhang, Jing He, Guangxia Gao, Yingjie Tian. For forecasting by regression, more and more instances and features are collected and added to regression models. When there are many noisy and redundant instances and features, these models often give poor predictive accuracy and interpretability owing to overfitting and computational complexity. Besides, least squares support vector regression (LSSVR) can hardly obtain sparse solutions or identify important instances and features from the data. In this paper, a novel bi-sparse optimization-based least squares regression (BSOLSR) method is proposed in the framework of LSSVR. Based on new row and column kernel matrices, the ℓ0-norm sparsification function is introduced into the LSSVR model. By alternately solving two unconstrained quadratic programming problems or two systems of linear equations, BSOLSR can predict output values for given input points and provide interpretable results by simultaneously selecting relevant and important instances and features. As shown in the experimental results on real data sets and the comparison with SVR, ℓ1-norm SVR (L1SVR), LSSVR, and multiple kernel learning SVR (MKLSVR), the proposed BSOLSR can effectively increase predictive accuracy, discover representative instances and important features, and yield interpretable results, which are critical for many real-world applications.
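The LSSVR baseline that BSOLSR sparsifies replaces the SVR quadratic program with a single linear system, which is also why every training instance ends up with a nonzero weight; a minimal sketch (no bias term, invented hyperparameters):

```python
import numpy as np

def rbf_kernel(A, B, width):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
    return np.exp(-width * d2)

def lssvr_fit(X, y, gamma=1e4, rbf_width=0.5):
    """Simplified LSSVR: solve (K + I/gamma) alpha = y instead of a QP.
    Every sample gets a weight, illustrating why plain LSSVR is not sparse."""
    K = rbf_kernel(X, X, rbf_width)
    return np.linalg.solve(K + np.eye(len(X)) / gamma, y)

def lssvr_predict(X_train, alpha, X_new, rbf_width=0.5):
    return rbf_kernel(X_new, X_train, rbf_width) @ alpha

X = np.linspace(0, 9, 10)[:, None]
y = np.sin(X).ravel()
alpha = lssvr_fit(X, y)
pred = lssvr_predict(X, alpha, X)
```

BSOLSR's contribution is to prune both rows (instances) and columns (features) of that kernel system via an ℓ0-norm penalty, which the sketch above does not attempt.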
  • Development of a decision support system for neuro-electrostimulation:
           Diagnosing disorders of the cardiovascular system and evaluation of the
           treatment efficiency
    • Abstract: Publication date: Available online 29 January 2019. Source: Applied Soft Computing. Author(s): Vladimir Kublanov, Anton Dolganov. The study describes a preliminary stage in the development of a decision support system for physicians performing neuro-electrostimulation of neck neural formations in patients suffering from cardiovascular system disorders. Arterial hypertension was used as the clinical model of the disorders. The study consisted of two steps: diagnosing arterial hypertension, and evaluating treatment efficiency during the neuro-electrostimulation application. For the diagnosis part, a clinical study was conducted involving heart rate variability signals recorded during a tilt-test functional load. The heart rate variability signal is an indirect means of assessing autonomic nervous system functioning, and disturbances of the autonomic nervous system are essential in the pathology of arterial hypertension. The performance of different machine learning techniques and feature selection strategies in the task of binary classification (healthy volunteers versus patients suffering from arterial hypertension) was compared. Genetic programming feature selection with a quadratic discriminant analysis classifier reached the highest classification accuracy. The best feature combinations were used to evaluate treatment efficiency. Predictions based on the selected heart rate variability features show a high level of agreement with the arterial pressure dynamics. The results indicate the potential of the proposed decision support system.
  • Enhanced crow search algorithm for optimum design of structures
    • Abstract: Publication date: Available online 28 January 2019. Source: Applied Soft Computing. Author(s): Armin Javidi, Eysa Salajegheh, Javad Salajegheh. In this study, the capability of the recently introduced crow search algorithm (CSA) was evaluated for structural optimization problems. It was observed that the standard CSA led to undesirable performance in solving structural optimization problems. Accordingly, three modifications were made to the standard CSA to obtain the enhanced crow search algorithm (ECSA), without adding any parameters to the algorithm. First, each violated decision variable was replaced by the corresponding decision variable of the global best solution. Second, a free-fly mechanism was suggested for constraint handling. Third, the personal upper bound strategy (PUBS) was proposed to eliminate inessential structural analyses. To assess the efficacy of the proposed modifications, four popular benchmark structures were employed; each modification was added to the CSA in a separate stage and its effects were illustrated. The results on the benchmark structures were examined in terms of minimum weight, convergence rate, and reliability. The results confirmed that the ECSA was significantly better than the standard CSA. Moreover, the ECSA obtained better or very competitive results in comparison with well-known and newly developed metaheuristic methods.
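The standard CSA step that the ECSA modifies has each crow either chase another crow's memorized food position or fly to a random location; a generic sketch on the sphere function (not the paper's structural benchmarks, and with invented parameter values):

```python
import numpy as np

def csa_step(pop, memory, fl=2.0, ap=0.1, bounds=(-10, 10), rng=None):
    """One crow search algorithm iteration: each crow follows another crow's
    memorized position, or, with awareness probability ap, flies randomly."""
    rng = rng or np.random.default_rng(0)
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        j = rng.integers(n)                        # crow i picks crow j to follow
        if rng.random() >= ap:                     # crow j is unaware: chase it
            new_pop[i] = pop[i] + rng.random(d) * fl * (memory[j] - pop[i])
        else:                                      # crow j noticed: go random
            new_pop[i] = rng.uniform(*bounds, d)
    return np.clip(new_pop, *bounds)

# Minimize the sphere function for a few iterations
rng = np.random.default_rng(3)
pop = rng.uniform(-10, 10, (20, 2))
memory = pop.copy()                                # best position seen per crow
for _ in range(100):
    pop = csa_step(pop, memory, rng=rng)
    better = (pop ** 2).sum(1) < (memory ** 2).sum(1)
    memory[better] = pop[better]
```

The ECSA's three modifications act on exactly these pieces: repairing violated variables from the global best, handling constraints via free-fly, and skipping evaluations via the personal upper bound.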
  • Identifying key players in large social networks by using a
           multi-objective artificial bee colony optimization approach
    • Abstract: Publication date: Available online 24 January 2019. Source: Applied Soft Computing. Author(s): Dimas de la Fuente, Miguel A. Vega-Rodríguez, Carlos J. Pérez. Identifying a set of individuals who have influential relevance and act as key players is a matter of interest in many real-world situations, especially those related to the Internet. Although several approaches have been proposed to identify key player sets, they mainly focus on the optimization of a single objective. This may lead to poor performance, since the identified sets are not usually able to perform well in real-life applications where more objectives of interest are taken into account. Multi-objective optimization seems the natural extension for this task, but there is a lack of this type of methodology in the scientific literature. An efficient Multi-Objective Artificial Bee Colony (MOABC) algorithm is proposed to address the key player identification problem and is applied to six networks of different dimensions and characteristics. The proposed approach identifies key players better than previously proposed approaches, especially in the context of large social networks. Its performance has been evaluated according to different quality metrics. The results of the MOABC execution show important improvements with respect to the best multi-objective results in the scientific literature: on average, 13.20% improvement in Hypervolume, 120.39% in Coverage Relation, and 125.52% in the number of non-dominated solutions. Moreover, the proposed algorithm is also more robust across repeated executions.
  • Bayesian fuzzy hypothesis test in wireless sensor networks with noise
    • Abstract: Publication date: Available online 24 January 2019. Source: Applied Soft Computing. Author(s): Abdolreza Mohammadi, S. Hamed Javadi, Domenico Ciuonzo. Reliable event detection is an essential task of wireless sensor networks (WSNs), in which there are different types of uncertainty. In this paper, we consider a decentralized detection problem for a WSN and use the fuzzy hypothesis test (FHT) in the Bayesian perspective to model the noise power uncertainty. FHT employs membership functions as hypotheses for modeling and analyzing the uncertainty. Using the Bayesian FHT (BFHT), a local detector scheme is proposed for each sensor node in which the threshold depends on the noise power uncertainty bound. Local decisions of the sensors are sent to the fusion center (FC) and combined to make a final decision about the absence or presence of the event. The proposed algorithm is evaluated in terms of probabilities of detection and false alarm. Simulations show that the proposed BFHT detector considerably outperforms the Anderson-Darling method as well as the conventional energy detector in the presence of noise power uncertainty.
  • Stacking-based ensemble learning of decision trees for interpretable
           prostate cancer detection
    • Abstract: Publication date: Available online 23 January 2019. Source: Applied Soft Computing. Author(s): Yuyan Wang, Dujuan Wang, Na Geng, Yanzhang Wang, Yunqiang Yin, Yaochu Jin. Prostate cancer is a highly incident malignant cancer among men. Early detection of prostate cancer is necessary for deciding whether a patient should receive a costly and invasive biopsy with possible serious complications. However, existing cancer diagnosis methods based on data mining focus only on diagnostic accuracy, neglecting the interpretability of the diagnosis model that is necessary for helping doctors make clinical decisions. To take both accuracy and interpretability into consideration, we propose a stacking-based ensemble learning method that simultaneously constructs the diagnostic model and extracts interpretable diagnostic rules. For this purpose, a multi-objective optimization algorithm is devised to maximize the classification accuracy and minimize the ensemble complexity for model selection. For model combination, a random forest classifier-based stacking technique is explored for the integration of the base learners, i.e., decision trees. Empirical results on real-world data from the General Hospital of PLA demonstrate that the proposed method outperforms several state-of-the-art methods in terms of classification accuracy, sensitivity and specificity. Moreover, the results reveal that several diagnostic rules extracted from the constructed ensemble learning model are accurate and interpretable.
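Stacking, as used above, means level-0 base learners whose predictions feed a level-1 meta-learner. A minimal stdlib sketch, with two threshold "stumps" as base learners and a simple vote-counting meta-rule standing in for the paper's decision trees and random forest combiner (data and thresholds are toy values):

```python
def fit_stump(xs, ys):
    """Learn the threshold on a single feature that minimises 0/1 training error."""
    best = (None, 1.0)
    for t in sorted(set(xs)):
        err = sum((x > t) != y for x, y in zip(xs, ys)) / len(ys)
        if err < best[1]:
            best = (t, err)
    return best[0]

def stump_predict(t, x):
    return int(x > t)

# Toy data: the label is 1 when both features are large.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]
X = [d[0] for d in data]
y = [d[1] for d in data]

# Level 0: one stump per feature.
stumps = [fit_stump([x[i] for x in X], y) for i in range(2)]

# Level 1: the meta-learner combines the base predictions (here, a vote count).
def stacked_predict(x):
    votes = [stump_predict(t, x[i]) for i, t in enumerate(stumps)]
    return int(sum(votes) >= 1)

print([stacked_predict(x) for x in X])
```

In the paper the meta-learner is itself trained (a random forest over base-tree outputs) and the set of base trees is chosen by multi-objective optimization; the vote count here only illustrates the two-level structure.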
  • A semi-supervised auto-encoder using label and sparse regularizations for
    • Abstract: Publication date: Available online 23 January 2019. Source: Applied Soft Computing. Author(s): Zhilei Chai, Wei Song, Huiling Wang, Fei Liu. The semi-supervised auto-encoder (SSAE) is a promising deep-learning method that integrates the advantages of unsupervised and supervised learning processes. The former learning process is designed to extract the underlying concepts of the data as intrinsic information and enhance its generalization ability to express the data. The supervised process, in turn, tends to describe the rules of categorization with labels that further improve categorization accuracy. In this paper, we propose a novel semi-supervised learning method, namely, the label and sparse regularization AE (LSRAE), by integrating label and sparse constraints to update the structure of the AE. The sparse regularization activates a minority of important neurons, while most of the other neurons are inhibited. Such a method ensures that LSRAE can yield a more local and informative structure of the data. Moreover, by implementing the label constraint, the supervised learning process can extract the features regulated by category rules and further enhance the performance of the classifier. To extensively test the performance of LSRAE, we conduct experiments on the benchmark datasets USPS, ISOLET and MNIST. The experimental results demonstrate the superiority of LSRAE in comparison with state-of-the-art feature extraction methods including AE, LSAE, SAE, ELM, DBN, and adaptive DBN.
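A composite objective of the kind described, reconstruction error plus a sparsity penalty plus a supervised label term, can be sketched as follows. The KL-divergence sparsity penalty and the weights alpha/beta are common choices for sparse auto-encoders, not taken from this paper; all numbers are illustrative.

```python
from math import log

def kl_sparsity(rho, rho_hat):
    """KL divergence pushing a unit's mean activation rho_hat toward the target rho."""
    return rho * log(rho / rho_hat) + (1 - rho) * log((1 - rho) / (1 - rho_hat))

def composite_loss(recon_err, label_err, mean_acts, rho=0.05, alpha=0.1, beta=1.0):
    """Reconstruction + alpha * sparsity penalty + beta * supervised label term."""
    sparsity = sum(kl_sparsity(rho, a) for a in mean_acts)
    return recon_err + alpha * sparsity + beta * label_err

# Units whose mean activation exceeds the target rho=0.05 are penalised.
print(round(composite_loss(0.8, 0.3, [0.05, 0.2, 0.5]), 3))
```

The penalty is zero exactly when a unit's mean activation hits the target, so minimising the total loss drives most units toward low (sparse) activity while the label term keeps the features discriminative.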
  • A new method for probabilistic linguistic multi-attribute group decision
           making: Application to the selection of financial technologies
    • Abstract: Publication date: Available online 22 January 2019. Source: Applied Soft Computing. Author(s): Xiao-Bing Mao, Min Wu, Jiu-Ying Dong, Shu-Ping Wan, Zhen Jin. “No technology, no financial” has become the consensus in the banking industry. Against the background of financial technology (Fintech), how banks should select an appropriate technology company to cooperate with has become a key issue. Technology company selection can be regarded as a kind of multi-attribute group decision making (MAGDM) problem. The probabilistic linguistic term set (PLTS) is a useful tool to express decision makers’ (DMs’) evaluations in technology company selection. This paper develops a new method for MAGDM with PLTSs. Firstly, the possibility degree and range value of PLTSs are defined. Then a possibility degree algorithm is designed for ranking PLTSs. A Euclidean distance measure between PLTSs is presented and extended to probabilistic linguistic matrices. Based on the Archimedean t-norm and s-norm, some new operational laws for PLTSs are defined and some desirable properties are discussed. Then, a generalized probabilistic linguistic Hamacher weighted averaging (GPLHWA) operator and a generalized probabilistic linguistic Hamacher ordered weighted averaging (GPLHOWA) operator are developed. Some useful properties of these operators are investigated in detail. Combined with the subjective weights of DMs, the DMs’ weights are determined by the adjusted coefficients. Using the GPLHWA operator, the collective decision matrix is obtained by aggregating all the individual decision matrices. By maximizing the total weighted square possibility degree, a multi-objective program is constructed to derive the attribute weights. The ranking order of alternatives is generated by integrating ELECTRE and TOPSIS. Thereby, a new method is put forward for MAGDM with PLTSs. A Fintech example is analyzed to show the effectiveness of the proposed method. Sensitivity and comparative analyses are conducted to illustrate its advantages.
  • An application of phonocardiography signals for psychological stress
           detection using non-linear entropy based features in empirical mode
           decomposition domain
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Amandeep Cheema, Mandeep Singh. The well-established association of psychological stress and pathogenesis emphasizes the need for early detection of psychological stress to prevent the progression of diseases and hence save human lives. The purpose of this research paper is to present a new framework for using phonocardiography (PCG) signals to detect psychological stress, based on non-linear entropy-based features extracted using empirical mode decomposition (EMD). The PCG signals are used to extract the time durations of cardiac cycles between consecutive S1 peaks to form the Inter-beat Interval (IBI) signal. The IBI signal is decomposed into sub-band signals using EMD to form Intrinsic Mode Functions (IMFs). Then non-linear features, namely Permutation Entropy (PEn), Fuzzy Entropy (FzEn) and the K-Nearest Neighbour (K-NN) entropy estimator, are evaluated. Ranking methods, namely the Entropy method, the Bhattacharya space algorithm, the Receiver Operating Characteristic (ROC) method and the Wilcoxon method, are then used to optimize the system. The extracted entropy features are fed to a Least-Square Support Vector Machine (LS-SVM) for classification, and the highest accuracy, sensitivity and specificity obtained using the proposed system are 96.67%, 100% and 93.33%, respectively. The proposed system opens a new research area of using the PCG signal for psychological stress detection, which can be easily used for home care and is relatively inexpensive in comparison to other biophysical measures like Electroencephalography (EEG) and Electrocardiography (ECG).
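Of the features named above, permutation entropy has a particularly compact definition (Bandt-Pompe): the Shannon entropy of the distribution of ordinal patterns over sliding windows. A stdlib sketch, with the normalisation and tie-handling as common simplifying choices rather than this paper's exact settings:

```python
from math import factorial, log

def permutation_entropy(signal, order=3):
    """Normalised permutation entropy: Shannon entropy of the ordinal-pattern
    distribution over sliding windows of length `order`, scaled to [0, 1]."""
    counts = {}
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        # Ordinal pattern: the ranking of positions by their values.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = sum(-(c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))  # normalise by log(order!)

# A strictly monotone signal has a single ordinal pattern, hence zero entropy.
print(permutation_entropy([1, 2, 3, 4, 5, 6], order=3))
```

Applied to an IBI sequence, low values indicate very regular beat-to-beat dynamics and higher values more irregular ones.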
  • Structural damage detection using imperialist competitive algorithm and
           damage function
    • Abstract: Publication date: April 2019. Source: Applied Soft Computing, Volume 77. Author(s): Saleheh Gerist, Mahmoud R. Maheri. In practical damage detection problems, experimental modal data is only available for a limited number of modes, and in each mode only a limited number of nodal points are recorded. In using modal data, the majority of the available damage detection solution techniques either require data for all the modes, or all the nodal data for a number of modes; neither of which may be practically available through experiments. In the present study, damage identification is carried out using only a limited number of nodal data from a limited number of modes. The proposed method uses the imperialist competitive optimization algorithm and damage functions. To decrease the number of design variables, several bilinear damage functions are defined to model the damage distribution. Damage functions with both variable widths and variable weights are proposed for increased accuracy. Four different types of objective functions which use modal responses of the damaged structure are investigated with the aim of finding the most suitable function. The efficiency of the proposed method is investigated using three benchmark numerical examples with both clean and noisy modal data. It is shown that by using only a limited number of modal data, the proposed method is capable of accurately detecting damage locations and evaluating their extents with reasonable accuracy. The proposed algorithm is most effective with noisy modal data, compared to other available solutions.
  • Multichannel image contrast enhancement based on linguistic rule-based
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Hoang Huy Ngo, Cat Ho Nguyen, Van Quyen Nguyen. This study follows the direct approach to image contrast enhancement, which changes the image contrast at each of its pixels and is more effective than the indirect approach that deals with image histograms. However, there are only a few studies following the direct approach because, by its nature, it is very complex. Additionally, it is difficult to develop an effective method since a balance must be kept between maintaining local and global image features while changing the contrast at each individual pixel. Moreover, raw images obtained from many sources, randomly influenced by many external factors, can be considered as fuzzy uncertain data. In this context, we propose a novel method to apply and directly handle expert fuzzy linguistic knowledge of image contrast enhancement to simulate the human capability of using natural language. The formalism developed in the study is based on hedge algebras, considered as a theory which can directly handle the linguistic words of variables. This allows the proposed method to produce an image contrast intensificator from a given expert linguistic rule base. A technique to preserve global as well as local image features is proposed based on a fuzzy clustering method, which is applied for the first time in this field to reveal region image features of raw images. The projections of the obtained clusters on each channel are suitably aggregated to produce a new channel image considered as input to the pixelwise defined operators proposed in this study. Many experiments are performed to demonstrate the effect of the proposed method versus the counterparts considered.
  • Online distributed fuzzy modeling of nonlinear PDE systems: Computation
           based on adaptive algorithms
    • Abstract: Publication date: Available online 18 January 2019. Source: Applied Soft Computing. Author(s): Mohammad Mehdi Mardani, Mokhtar Shasadeghi, Behrouz Safarinejadian, Tomislav Dragičević. With the emergence of novel model-based controllers for partial differential equation (PDE) systems, identifying the mathematical model of PDE systems has become a promising and complicated research topic. This paper suggests a new method to identify an adaptive Takagi-Sugeno (TS) fuzzy PDE model for nonlinear multi-input multi-output (MIMO) first-order PDE systems. The proposed approach is performed online based on the measured input and output data of the nonlinear PDE systems. Furthermore, the identification process is carried out for the cases where the noise is either white or colored. For the case of white noise, a nonlinear recursive least squares (NRLS) approach is applied to identify the nonlinear system. On the other hand, when colored noise is exerted on the nonlinear PDE system, the fuzzy PDE model of the nonlinear PDE system and the nonlinear colored noise are identified based on the nonlinear extended matrix method (NEMM). Moreover, the identification problem for both colored and white noise cases is investigated when the premise variables of the membership functions are known or unknown. Finally, in order to illustrate the effectiveness and merits of the proposed methods, the identification method is applied to a practical nonisothermal Plug-Flow Reactor (PFR) and a hyperbolic PDE system with Lotka-Volterra type applications. As expected, the evolutions of the error between the state variables of the obtained TS fuzzy PDE model and the output data converge to zero in steady-state conditions. Thus, one concludes that the proposed identification algorithm can accurately adjust both the consequent and antecedent parameters of the TS fuzzy PDE model.
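The recursive least squares idea underlying the NRLS step can be illustrated in its simplest (scalar, linear) form: estimate theta in y = theta * u + noise, updating the estimate one sample at a time. This is a much-simplified stand-in for the paper's nonlinear, fuzzy-model version; the forgetting factor and initial covariance are generic defaults.

```python
import random

def rls(us, ys, lam=1.0):
    """Scalar recursive least squares with forgetting factor lam."""
    theta, P = 0.0, 1e6              # initial estimate and (scalar) covariance
    for u, y in zip(us, ys):
        k = P * u / (lam + u * P * u)    # gain
        theta += k * (y - theta * u)     # correct with the prediction error
        P = (P - k * u * P) / lam        # covariance update
    return theta

random.seed(1)
true_theta = 2.5
us = [random.uniform(-1, 1) for _ in range(200)]
ys = [true_theta * u + random.gauss(0, 0.05) for u in us]
print(rls(us, ys))
```

With lam < 1 older samples are discounted, which is what makes the estimate adaptive to slowly drifting parameters, the online setting the abstract targets.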
  • Facial expression recognition using distance and texture signature
           relevant features
    • Abstract: Publication date: Available online 18 January 2019. Source: Applied Soft Computing. Author(s): Asit Barman, Paramartha Dutta. Distance and texture characteristics among the landmark points of human faces are important features for the recognition of human faces. In this article we consider (i) a normalized distance signature obtained from an Active Appearance Model (AAM) based grid, (ii) a normalized texture signature derived from salient landmarks within the grid, (iii) stability indices arising out of these signatures and (iv) relevant statistical measures as the set of features for training artificial neural models such as the Multilayer Perceptron (MLP), Radial Basis Function Network (RBF), Nonlinear AutoRegressive with eXogenous input (NARX) network and Convolutional Neural Network (CNN) to achieve the task of facial expression recognition. The Cohn-Kanade (CK+), Japanese Female Facial Expression (JAFFE), MMI and MUG benchmark databases are used to conduct the experiments and the results obtained justify the effectiveness of the proposed procedure. The combined distance-texture (D-T) signature is found to perform convincingly better than the distance signature and the texture signature individually. The effectiveness of the proposed technique based on the combined D-T signature is substantiated by its extremely encouraging performance compared to other existing methods.
  • Granular neural networks: A study of optimizing allocation of information
           granularity in input space
    • Abstract: Publication date: Available online 18 January 2019. Source: Applied Soft Computing. Author(s): Mingli Song, Yukai Jing, Witold Pedrycz. In this paper, we develop a granular input space for neural networks, especially multilayer perceptrons (MLPs). Unlike conventional neural networks, a neural network with granular input is an augmentation built on top of a well-learned numeric neural network. We explore an efficient way of forming granular input variables so that the corresponding granular outputs of the neural network achieve the highest values of the specificity (and support) criteria. When we augment neural networks by distributing information granularities across the input variables, the output of the network has different levels of sensitivity to different input variables. Capturing the relationship between the input variables and the output result is of great help for mining knowledge from the data, and in this way important features of the data can be easily found. As an essential design asset, information granules are considered in this construct. The quantification of information granules is viewed as levels of granularity given by the expert. The detailed optimization procedure for the allocation of information granularity is realized by an improved partheno-genetic algorithm (IPGA). The proposed algorithm is shown to be effective in numeric studies completed for synthetic data and data coming from the machine learning and StatLib repositories. Moreover, the experimental studies offer a deep insight into the specificity of the input features.
  • Enhanced goal attainment method for solving multi-objective
           security-constrained optimal power flow considering dynamic thermal rating
           of lines
    • Abstract: Publication date: Available online 16 January 2019. Source: Applied Soft Computing. Author(s): Shima Rahmani, Nima Amjady. Security-constrained optimal power flow (SCOPF) is an important problem in power system operation. Dynamic thermal rating (DTR), as an effective method to increase the transmission capacity of power systems, has recently been considered in some optimal power flow (OPF) and SCOPF models. Additionally, in today's power systems, the OPF problem involves various objectives, leading to multi-objective OPF models. In this paper, a new multi-objective SCOPF model considering the DTR of transmission lines is presented. In addition, a new multi-objective solution method is proposed to solve the multi-objective SCOPF problem. The proposed method is an enhanced version of the goal attainment technique, with improved search capability for covering the borders of the Pareto frontier. The proposed multi-objective DTR-included SCOPF model as well as the proposed multi-objective solution method are tested on the IEEE 118-bus test system and the obtained results are compared with those of other alternatives.
  • Multi-objective cellular particle swarm optimization for wellbore
           trajectory design
    • Abstract: Publication date: Available online 16 January 2019. Source: Applied Soft Computing. Author(s): Jun Zheng, Chao Lu, Liang Gao. Wellbore trajectory design is a key issue in drilling engineering. The criteria to evaluate a wellbore trajectory are summarized in this paper as the total trajectory length, the torque and the well profile energy. By minimizing the wellbore trajectory length, torque and profile energy simultaneously, it is most likely that a wellbore trajectory designed to arrive at the specific target can be drilled more safely, quickly and cheaply than other potential trajectories. However, these three objectives are often in conflict with each other and related in a highly nonlinear relationship. A multi-objective cellular particle swarm optimization (MOCPSO) with an adaptive neighborhood function is developed in this paper. MOCPSO is then applied with the three objective functions to obtain a set of Pareto optimal solutions that are beneficial for a less risky and less costly wellbore trajectory design option. Besides, MOCPSO's performance is compared with multi-objective PSO, the multi-objective evolutionary algorithm based on decomposition (MOEA/D) and the non-dominated sorting genetic algorithm-II (NSGA-II). The effect of the proposed neighborhood function is also investigated by contrasting it with the four commonly used neighborhood templates. Moreover, the radius parameter in the adaptive neighborhood function is analyzed to reveal its influence on the optimization performance. It can be inferred that MOCPSO is statistically superior to multi-objective PSO, NSGA-II and MOEA/D at the 0.05 level of significance on the wellbore trajectory design problem, and the proposed adaptive neighborhood function performs comparably to or better than the other commonly used neighborhood functions. According to the parameter analysis, it can be concluded that the MOCPSO approach with a radius value of 1 or 1.5 has better statistical performance.
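In a cellular PSO, particles live on a grid and each one exchanges information only with cells inside a neighborhood radius, instead of with the whole swarm. A minimal sketch of such a neighborhood on a toroidal grid, using Chebyshev distance as one plausible reading of the radius (the paper's adaptive function and its fractional radii are not reproduced here):

```python
def cell_neighbors(index, w, h, radius=1):
    """Indices of the cells within `radius` (Chebyshev) of `index` on a w x h torus."""
    x, y = index % w, index // w
    out = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = (x + dx) % w, (y + dy) % h   # wrap around the edges
            out.append(ny * w + nx)
    return sorted(set(out))

# Particle 0 on a 4x4 grid: itself plus its 8 wrapped neighbors.
print(cell_neighbors(0, w=4, h=4, radius=1))
```

Restricting the social component to these neighbors slows the spread of gbest through the swarm, which is the usual motivation for cellular topologies.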
  • Measuring and reaching consensus in group decision making with the
           linguistic computing model based on discrete fuzzy numbers
    • Abstract: Publication date: Available online 16 January 2019. Source: Applied Soft Computing. Author(s): Xiao-yu Ma, Meng Zhao, Xiao Zou. The linguistic computing model based on discrete fuzzy numbers has some good properties compared with other existing models and, as several researchers have shown, deserves further study. However, research on group consensus with this linguistic model is insufficient, given that group consensus is an important issue in group decision making. Therefore, this paper concentrates on this subject. It includes two main issues: research on the consensus measure, and research on a method for improving group consensus in group decision making based on this linguistic computing model. For the consensus measure, this paper first studies the aggregation method for discrete fuzzy numbers. Then, the index for measuring group consensus is determined. For improving group consensus, considering the characteristics of discrete fuzzy numbers, we present an algorithm to improve group consensus. In addition, an illustrative example of a decision-making problem about investment is presented to show the whole solving process. It also illustrates the feasibility, rationality and validity of all the proposed methods. Finally, comparisons between the proposals and existing studies are made, which helps point out the advantages of the proposed methods.
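A consensus measure of the general kind discussed above can be sketched as one minus the average distance between each expert's opinion and the collective (aggregated) opinion. This is a generic numeric stand-in, not the paper's discrete-fuzzy-number operators; opinions are plain membership vectors and the mean/L1 choices are illustrative.

```python
def consensus_index(opinions):
    """1 - average normalised L1 distance of each expert to the mean opinion."""
    n, m = len(opinions), len(opinions[0])
    collective = [sum(op[j] for op in opinions) / n for j in range(m)]
    avg_dist = sum(
        sum(abs(op[j] - collective[j]) for j in range(m)) / m for op in opinions
    ) / n
    return 1 - avg_dist

experts = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.7, 0.2, 0.1]]
print(round(consensus_index(experts), 3))
```

A feedback loop of the kind the paper's algorithm implements would repeatedly ask the expert farthest from the collective opinion to move toward it until this index exceeds a preset threshold.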
  • Data-driven soft sensor for animal cell suspension culture process based
           on DRVM
    • Abstract: Publication date: Available online 16 January 2019. Source: Applied Soft Computing. Author(s): Yonghong Huang, Huan Zang, Xiaodong Cheng, Hongsheng Wu, Jueyou Li. In order to solve the problem that key state variables (such as glucose concentration, lactic acid concentration and cell density) in the dynamic process of animal cell suspension culture are difficult to measure in real time, a data-driven soft sensor based on the dynamic relevance vector machine (DRVM) algorithm is proposed. The dominant variables of the soft sensor model are selected according to the process mechanisms. The corrcoef() function (MATLAB's correlation coefficient command) is used to analyze the correlation among environmental variables, and the auxiliary variables of the soft sensor model are further determined. An improved version of the three edge location algorithm is used to optimize the dynamic weights of the DRVM model. Considering the influence of dynamic transitions on the soft sensor, the maximum likelihood distribution method under the Bayesian framework is used to train the DRVM weights and hyperparameters, and the dynamic soft sensor model of animal cell suspension culture is established. The proposed method is applied to predict the key state variables in the BHK-21 cell suspension culture process. The experimental results show that compared with traditional static soft sensing based on RVM, the data-driven soft sensor based on DRVM has higher accuracy, and the rationality and superiority of the method are verified. In order to further realize real-time online prediction of the key state variables, the monitoring interface of the suspension culture process is developed on the LabVIEW virtual instrument platform through its MATLAB Script node, and data exchange between the MATLAB-based DRVM soft sensor for the key state variables of the cell suspension culture process and the monitoring interface is realized.
  • Enhanced global optimization methods applied to complex fisheries stock
           assessment models
    • Abstract: Publication date: Available online 16 January 2019. Source: Applied Soft Computing. Author(s): David R. Penas, Andrés Gómez, Basilio B. Fraguela, María J. Martín, Santiago Cerviño. Statistical fisheries models are frequently used by researchers and agencies to understand the behavior of marine ecosystems or to estimate the maximum acceptable catch of different species of commercial interest. The parameters of these models are usually adjusted through the use of optimization algorithms. Unfortunately, the choice of the best optimization method is far from trivial. This work proposes the use of population-based algorithms to improve the optimization process of the Globally applicable Area Disaggregated General Ecosystem Toolbox (Gadget), a flexible framework that allows the development of complex statistical marine ecosystem models. Specifically, parallel versions of the Differential Evolution (DE) and Particle Swarm Optimization (PSO) methods are proposed. The proposals include automatic selection of the internal parameters to reduce the complexity of their usage, and a restart mechanism to avoid local minima. The resulting optimization algorithms are called PMA (Parallel Multirestart Adaptive) DE and PMA PSO, respectively. Experimental results show that the new algorithms are faster and produce more accurate solutions than the other parallel optimization methods already included in Gadget. Although the new proposals have been evaluated on fisheries models, there is nothing in them specific to the tested models, and thus they can also be applied to other optimization problems. Moreover, the proposed PMA scheme can be seen as a template that can be easily applied to other population-based heuristics.
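The two ingredients named above, a population-based optimizer plus a restart mechanism to escape local minima, can be sketched with a minimal serial DE/rand/1/bin and a stagnation counter. This is written in the spirit of the PMA scheme, not as its implementation: the parallelism, the adaptive parameter selection, and all constants here are illustrative.

```python
import random

def de(f, dim, bounds, pop_size=20, iters=300, F=0.5, CR=0.9, patience=50, seed=0):
    """Minimal DE/rand/1/bin with restart after `patience` stagnant generations."""
    rng = random.Random(seed)
    def new_pop():
        return [[rng.uniform(*bounds) for _ in range(dim)] for _ in range(pop_size)]
    pop = new_pop()
    fit = [f(x) for x in pop]
    best, best_f, stall = pop[0][:], float("inf"), 0
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            tf = f(trial)
            if tf <= fit[i]:          # greedy selection
                pop[i], fit[i] = trial, tf
        cur = min(fit)
        if cur < best_f:
            best_f, best, stall = cur, pop[fit.index(cur)][:], 0
        else:
            stall += 1
            if stall >= patience:     # restart the population, keep the best-so-far
                pop = new_pop()
                fit = [f(x) for x in pop]
                stall = 0
    return best, best_f

sphere = lambda x: sum(v * v for v in x)
xbest, fbest = de(sphere, dim=5, bounds=(-5, 5))
print(fbest)
```

The key point the PMA scheme generalises is that the restart-and-keep-best logic is independent of the heuristic inside the generation loop, so the same wrapper applies to PSO as well.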
  • Pulse-coupled neural network and its optimization for segmentation of
           electrical faults with infrared thermography
    • Abstract: Publication date: Available online 11 January 2019. Source: Applied Soft Computing. Author(s): Dongguo Zhou, Ming Chi. This paper describes a method for segmenting electrical equipment faults in infrared thermography by using pulse-coupled neural networks (PCNN). The pulse-coupled neuron model used in the PCNN is an optimization of the original neuron model, designed to make the behavior of neuron activity easier to control and alter. In the receptive field, the image and its gradient information are regarded as the inputs of the PCNN. The method for adjusting the value of the linking coefficient is then derived from the principle of maximum likelihood estimation in the region of temporal pulse outputs, thus building the inner relationship between the parameters and image statistics. Besides, an edge constraint method, integrated into the pulse generator, is designed to alter the capture behavior of the neighboring neurons. It is shown that adding the constraint to the model increases the likelihood of segmenting the desired fault region. Finally, several experimental results, especially on electrical equipment fault images, show that the proposed model outperforms some existing PCNN-based models in image segmentation.
  • Self-boosting first-order autonomous learning neuro-fuzzy systems
    • Abstract: Publication date: Available online 9 January 2019. Source: Applied Soft Computing. Author(s): Xiaowei Gu, Plamen Angelov. In this paper, a detailed mathematical analysis of the optimality of the premise and consequent parts of the recently introduced first-order Autonomous Learning Multi-Model (ALMMo) neuro-fuzzy system is conducted. A novel self-boosting algorithm for structure and parameter optimization is then introduced to the ALMMo, which results in the self-boosting ALMMo (SBALMMo) neuro-fuzzy system. By minimizing the objective functions with previously collected data, the SBALMMo is able to optimize its system structure and parameters in a few iterations. Numerical examples based on benchmark datasets and real-world problems demonstrate the effectiveness and validity of the SBALMMo, and show the strong potential of the proposed approach for real applications.
  • Risk assessment in control of fractional-order coronary artery system in
           the presence of external disturbance with different proposed controllers
    • Abstract: Publication date: Available online 20 December 2018. Source: Applied Soft Computing. Author(s): Reza Behinfaraz, Sehraneh Ghaemi, Sohrab Khanmohammadi. In this study, different controllers for the control of a fractional-order coronary artery system in the presence of external disturbance are designed. Suitable controllers are proposed using sliding mode control and the proposed type-1 and type-2 fuzzy methods. In the sliding mode control, a fractional sliding surface is presented and the control signal is modified to prevent chattering in the control system. Through mathematical analysis, a type of membership function is suggested that has better performance in the fractional-order system. Also, a rule base is presented that leads to better results in the type-1 and type-2 fuzzy controllers. The risk of the proposed controllers under different conditions is analyzed. Finally, comparison with other analysis methods shows that the proposed method yields more accurate results.
  • Improving the Clustering Search heuristic: An application to cartographic
    • Abstract: Publication date: Available online 6 December 2018. Source: Applied Soft Computing. Author(s): Eliseu J. Araújo, Antônio A. Chaves, Luiz A.N. Lorena. The use of hybrid metaheuristics is a good approach to improving the quality and efficiency of metaheuristics. This paper presents a hybrid method based on Clustering Search (CS). CS combines metaheuristics with heuristics for local search, intensifying the search in regions of the search space considered promising. We propose a more efficient way to detect promising regions, based on the clustering techniques of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Label Propagation (LP) and Natural Group Identification (NGI) algorithms. This proposal is called Density Clustering Search (DCS). To analyze this new approach, we solve a combinatorial optimization problem with many practical applications, the Point Feature Cartographic Label Placement (PFCLP) problem. The PFCLP attempts to locate identifiers (labels) of regions on a map without damaging legibility. The computational tests used instances taken from the literature. The results were satisfactory for clusters made with LP and NGI, which gave better results than the classic CS, indicating that these clustering methods are a good alternative for improving CS.
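The density-clustering idea behind DCS can be illustrated with a tiny stdlib DBSCAN: dense groups of solutions become clusters (candidate promising regions) and sparse points are labelled noise (-1). This is a generic textbook DBSCAN on 2-D points, not the paper's DCS; eps and min_pts below are arbitrary toy values.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster label per point, -1 meaning noise."""
    labels = [None] * len(points)
    cluster = -1
    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise (may be reclaimed as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point: joins, but does not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:         # core point: keep expanding the cluster
                queue.extend(nj)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=2, min_pts=2))
```

In a CS-style hybrid, each cluster found this way would seed a local search, while noise points would be left to the global metaheuristic.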
  • Efficient approximation approaches to minimal exposure path problem in
           probabilistic coverage model for wireless sensor networks
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Huynh Thi Thanh Binh, Nguyen Thi My Binh, Nguyen Hong Ngoc, Dinh Thi Ha Ly, Nguyen Duc Nghia. A well-known method for evaluating the coverage quality of Wireless Sensor Networks (WSNs) is using exposure as a measure, especially in barrier coverage problems. Among all studies related to exposure, discussions regarding the Minimal Exposure Path (MEP) problem have dominated research in recent years. The problem aims to find a path on which an intruder can penetrate the sensing field with the lowest probability of being detected. This path, along with its exposure value, enables network infrastructure designers to identify the worst-case coverage of the WSN and make necessary improvements. Most prior research worked on the MEP problem under the assumption that there are no environmental factors such as vibration, temperature, etc., which cause errors in practical WSN systems. To overcome this drawback, we first formulate the MEP problem based on the Probabilistic Coverage Model with noise (hereinafter PM-based-MEP) and introduce a new definition of the exposure metric for this model. The PM-based-MEP is then converted into a numerical functional extremum problem that is high-dimensional, non-differentiable and non-linear. Adapting to these characteristics, we propose two approximation methods, GB-MEP and GA-MEP, for solving the converted problem. GB-MEP is based on the traditional grid-based method, fine-tuned by several tweaks, and GA-MEP is formed by a genetic algorithm with a featured individual representation and an effective combination of genetic operators. Experimental results on numerous instances indicate that the proposed algorithms are suitable for the converted PM-based-MEP problem and perform well regarding both solution accuracy and computation time compared with existing approaches.
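The baseline grid-based approach that GB-MEP refines can be sketched as a shortest-path search: discretise the field, weight each cell by its "exposure", and run Dijkstra from one edge to the other. The grid values and the left-to-right crossing below are illustrative; the paper's noise-aware exposure metric and its tweaks are not reproduced.

```python
import heapq

def min_exposure_path(grid):
    """Minimum total cell-exposure to cross the grid from the left to the right edge."""
    h, w = len(grid), len(grid[0])
    dist = {(y, 0): grid[y][0] for y in range(h)}     # enter anywhere on the left
    pq = [(d, cell) for cell, d in dist.items()]
    heapq.heapify(pq)
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if d > dist.get((y, x), float("inf")):
            continue                                   # stale heap entry
        if x == w - 1:
            return d                                   # reached the right edge
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + grid[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(pq, (nd, (ny, nx)))
    return float("inf")

exposure = [
    [1, 9, 1, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]
print(min_exposure_path(exposure))
```

The intruder's best route hugs the low-exposure bottom row, which is exactly the worst-case coverage information a designer wants from the MEP.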
  • A hierarchical simple particle swarm optimization with mean dimensional information
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Hao-Ran Liu, Jing-Chuang Cui, Ze-Dan Lu, Da-Yan Liu, Yu-Jing Deng. To reduce the negative influence of overemphasizing gbest, the dimensional information of each particle is introduced as a new exemplar. This additional information source is incorporated into simple PSO to establish a simpler position model. Two further simplified position updating models, a cognition-only model and a social-only model, based on the simple PSO algorithm are presented as well. A time hierarchy strategy is extended from the probability hierarchy strategy, both aiming to make full use of the advantages of the three models. The three models are used with time or probability hierarchy to update each particle's position, yielding two proposed algorithms, THSPSO and PHSPSO. Experiments are conducted on fifteen benchmark functions. The results demonstrate that both proposed algorithms perform excellently on basic functions compared with other popular PSO variants. The probability hierarchy strategy is more effective than the time hierarchy strategy in general.
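As a rough illustration of the idea described above (a sketch only, not the authors' algorithm: the velocity-free update form and the random weighting are assumptions), a position update that mixes pbest, gbest and the particle's own mean dimensional information might look like:

```python
import random

def mean_dim_update(position, pbest, gbest, w=0.9):
    """One 'simple PSO'-style position update (no velocity term) that pulls
    each dimension toward pbest, gbest and the particle's mean dimensional
    information (mdi), reducing the dominance of gbest alone."""
    mdi = sum(position) / len(position)  # mean over the particle's own dimensions
    new = []
    for d in range(len(position)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        new.append(w * position[d]
                   + r1 * (pbest[d] - position[d])
                   + r2 * (gbest[d] - position[d])
                   + r3 * (mdi - position[d]))
    return new
```

A cognition-only or social-only variant, as mentioned in the abstract, would simply drop the gbest or pbest term respectively.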
  • Speckle noise removal in SAR images using Multi-Objective PSO (MOPSO)
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): R. Sivaranjani, S. Mohamed Mansoor Roomi, M. Senthilarasi. SAR images are inherently affected by speckle noise, and although attempts made earlier to remove speckle succeeded, there is still the challenge of preserving the edges of images. This is due to the smoothing effect of most of the earlier algorithms that work on thresholding coefficients in the transform domain. There exists a trade-off between denoising and the ability to preserve edges in selecting a suitable threshold. Estimation of an optimal threshold is a major concern and is compounded by the requirement for concurrent smoothing of noise and preservation of structural/edge information in an image. Considering the search for an optimal threshold as exhaustive and the requirements as contradictory, we model this as a Multi-Objective Particle Swarm Optimization (MOPSO) task and propose a MOPSO framework for despeckling an SAR image using a Dual-Tree Complex Wavelet Transform (DTCWT) in the frequency domain. Two counteractive reference metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Mean Structural Similarity Index Metric (MSSIM), and non-reference metrics, the alpha-beta (αβ) ratio and the Despeckling Evaluation Index (DEI), have been used as the objective functions of MOPSO. An optimal threshold derived from this multi-objective optimization is chosen for despeckling the SAR images. The proposed solution has been found to outperform state-of-the-art filters such as the Lee, Kuan, Frost and SAR-BM3D filters. The proposed MOPSO framework is also superior to the competing optimization technique, a Multi-Objective Evolutionary Algorithm (MOEA) based on Differential Evolution (DE), for despeckling.
  • A new effective and powerful medical image segmentation algorithm based on
           optimum path snakes
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Pedro P. Rebouças Filho, Antonio C. da Silva Barros, Jefferson S. Almeida, J.P.C. Rodrigues, Victor Hugo C. de Albuquerque. Novel segmentation methods based on models of deformable active contours are constantly proposed and validated in different fields of knowledge, with the aim of making the detection of regions of interest standard. This paper proposes a new method called Optimum Path Snakes (OPS), an adaptive, parameter-free algorithm that defines the total energy of an active contour model with automatic initialization and stopping criteria. In the experimental assessment, the OPS is compared against approaches commonly used in the field, such as vector field convolution, gradient vector flow, and other specialist methods for lung segmentation using thorax computed tomography images. The segmentation of regions with stroke was carried out with methods based on region growing, watershed and a specialist level set approach. Statistical validation metrics, the Dice coefficient (DC) and Hausdorff distance (HD), were also evaluated, as well as the processing time. The results showed that the OPS is a promising tool for image segmentation, presenting satisfactory results for DC and HD, and often superior to the other algorithms it was compared with, including those generated by specialists. Another advantage of the OPS is that it is restricted neither to specific types of images nor to specific applications.
  • An artificial bee colony-based hybrid approach for waste collection
           problem with midway disposal pattern
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Qu Wei, Zhaoxia Guo, Hoong Chuin Lau, Zhenggang He. This paper investigates a waste collection problem with the consideration of a midway disposal pattern. An artificial bee colony (ABC)-based hybrid approach is developed to handle this problem, in which a hybrid ABC algorithm is proposed to achieve better optimum-seeking performance while a heuristic procedure is proposed to select the disposal trip dynamically and calculate the carbon emissions in the waste collection process. The effectiveness of the proposed approach is validated by numerical experiments. Experimental results show that the proposed hybrid approach can solve the investigated problem effectively. The proposed hybrid ABC algorithm exhibits better optimum-seeking performance than four popular metaheuristics, namely a genetic algorithm, a particle swarm optimization algorithm, an enhanced ABC algorithm and a hybrid particle swarm optimization algorithm. It is also found that the midway disposal pattern should be used in practice because it reduces carbon emissions by up to 7.16% for the investigated instances.
  • A new platform for the prediction of field-dependent yield stress and
           plastic viscosity of magnetorheological fluids using particle swarm optimization
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Irfan Bahiuddin, Saiful Amri Mazlan, Mohd. Ibrahim Shapiai, Fitrian Imaduddin, Ubaidillah, Seung-Bok Choi. The yield stress and plastic viscosity of magnetorheological (MR) fluids are identified by fitting rheological models to a selected dataset over a certain range of shear rates. However, the datasets are often arbitrarily determined as there is no standardized procedure available. To overcome this problem, a platform capable of minimizing the fitting error while considering the classification of the shear rate regions is needed. Therefore, this work proposes a new platform for the systematic prediction of field-dependent rheological characteristics using particle swarm optimization (PSO). PSO is a meta-heuristic algorithm for solving optimization problems based on a guided search of the defined problem space, which is governed by the objective function. The intersection point of the low and high shear rate regions, the critical shear rate, is formulated as part of the objective function to standardize the characterization within the defined regions. The objective function is inspired by the modified Bingham biplastic and Papanastasiou models to predict five magnetic-field-dependent rheological parameters. In the development stage, the shear stress model was first established using a previously developed extreme learning machine method. Then, the codes of the PSO, objective functions and search space identification were developed and implemented. To validate the effectiveness of the proposed procedure, the platform performance was analysed at different algorithmic parameters and compared with existing optimization methods. The simulation results indicated that the proposed platform performed better than the existing ones, with an R² of 0.943, and was able to systematically and accurately predict the rheological parameters.
  • A novel approximate method of computing extended Nash equilibria
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Przemysław Juszczuk. In this paper, we propose a new method of computing an approximate Nash equilibrium with additional features. Existing algorithms often fail to produce an exact solution for games involving more than 3 players. Similarly, existing algorithms do not permit additional constraints on the problem. The principal idea of this paper is a methodology for computing approximate solutions through evolutionary computation. To do so, we first provide formal definitions of these problems and their approximate versions. We then present the details of our solution. One of the most important advantages of the proposed solution is flexibility, which provides solutions to problems related to Nash equilibrium extensions. The proposed idea is tested on several types of games that vary in difficulty and size. All test sets are generated based on the well-known Gamut program. Additional comparisons with classical algorithms are also performed. Results indicate that Differential Evolution is capable of obtaining satisfactory solutions to large random and covariant games. The results also demonstrate that there is a high probability that even large games, in which the set of strategies with a non-zero probability of being chosen is very small, have a solution. The computation time depends mainly on the problem size, and the original Nash equilibrium problem is unaffected by additional modifications.
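To make the approach concrete, here is a minimal sketch (under assumptions: a two-player bimatrix game, a regret-based fitness, and a plain DE/rand/1/bin scheme; the paper's actual encoding, operators and constraint handling are not specified in the abstract) of using Differential Evolution to search for an approximate equilibrium:

```python
import random

def regret(x, A, B):
    """Approximation quality of mixed strategies for a 2-player game (A, B are
    the row/column players' payoff matrices): the largest gain either player
    could obtain by deviating to a pure strategy. Zero regret = exact Nash."""
    n, m = len(A), len(A[0])
    p = [max(v, 0.0) for v in x[:n]]
    q = [max(v, 0.0) for v in x[n:]]
    sp, sq = sum(p) or 1.0, sum(q) or 1.0          # normalize to distributions
    p = [v / sp for v in p]
    q = [v / sq for v in q]
    u1 = sum(p[i] * A[i][j] * q[j] for i in range(n) for j in range(m))
    u2 = sum(p[i] * B[i][j] * q[j] for i in range(n) for j in range(m))
    d1 = max(sum(A[i][j] * q[j] for j in range(m)) for i in range(n)) - u1
    d2 = max(sum(B[i][j] * p[i] for i in range(n)) for j in range(m)) - u2
    return max(d1, d2)

def de_nash(A, B, pop=30, gens=200, F=0.7, CR=0.9, seed=1):
    """DE/rand/1/bin minimizing the regret of a concatenated strategy vector."""
    random.seed(seed)
    dim = len(A) + len(A[0])
    X = [[random.random() for _ in range(dim)] for _ in range(pop)]
    fit = [regret(x, A, B) for x in X]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = random.sample([k for k in range(pop) if k != i], 3)
            trial = [X[a][d] + F * (X[b][d] - X[c][d])
                     if random.random() < CR else X[i][d]
                     for d in range(dim)]
            f = regret(trial, A, B)
            if f < fit[i]:                          # greedy selection
                X[i], fit[i] = trial, f
    best = min(range(pop), key=fit.__getitem__)
    return X[best], fit[best]
```

The flexibility the abstract mentions comes from the fitness function: extra constraints on the equilibrium can simply be added as penalty terms to `regret`.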
  • Parallel island based Memetic Algorithm with Lin–Kernighan local search
           for a real-life Two-Echelon Heterogeneous Vehicle Routing Problem based on
           Brazilian wholesale companies
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Andre Bevilaqua, Diego Bevilaqua, Keiji Yamanaka. This paper deals with a Two-Echelon Fixed Fleet Heterogeneous Vehicle Routing Problem (2E-HVRP) faced by Brazilian wholesale companies. Vehicle routing problems with more than one phase are known as Multi-Echelon VRPs and consider situations in which freight is moved through intermediate facilities (e.g., cross-docks or distribution centers) before reaching its destination. The first phase of the problem dealt with here is to choose a first-level vehicle, from a heterogeneous set, that will leave a depot and reach an intermediate uncapacitated facility (satellite) to serve a set of second-level vehicles. After that, it is necessary to define routes for smaller vehicles, also from a heterogeneous set, that will visit a set of customers departing from and returning to a satellite. The solution proposed here is an efficient island-based memetic algorithm with a local search procedure based on the Lin–Kernighan heuristic (IBMA-LK). To attest to the algorithm's efficiency, it was first tested on single-echelon HVRP benchmark instances. The instances were then adapted to the two-echelon context and used for 2E-HVRP validation, and, finally, it was tested on 2E-HVRP instances created using real-world normalized data. The LocalSolver tool was also executed for comparison purposes. Promising results (which corroborate results obtained on the real problem) and future works are presented and discussed.
  • A novel method for structure selection of the Recurrent Random Neural
           Network using multiobjective optimisation
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Erivelton G. Nepomuceno. The Random Neural Network (RNN) has been extensively investigated over the past few decades; this research has resulted in a considerable number of theoretical and application papers. Although great effort has been made to develop a systematic procedure to train the recurrent form of the RNN, the choice of the number of neurons remains an open question. To overcome this problem, at least partially, this paper uses multiobjective optimisation (MOP) to select the number of neurons. The MOP framework uses the mean square error (MSE) and the number of neurons (N) as the objectives to be minimised. A stochastic nondominated algorithm (SNA) that excludes dominated solutions from the Pareto set is also introduced. Instead of using only the best solution, candidates for the Pareto set are excluded by statistical comparison among the mean values of the two objectives over all training runs. The SNA allows a statistically correct exclusion of dominated solutions; the best solution can then be picked using classical decision-making procedures. Numerical and real examples illustrate the potential of the proposed method in two areas: classification problems and system identification.
  • An integer encoding grey wolf optimizer for virtual network function placement
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Huanlai Xing, Xinyu Zhou, Xinhan Wang, Shouxi Luo, Penglin Dai, Ke Li, Hui Yang. This paper studies the virtual network function placement (VNF-P) problem in the context of network function virtualization (NFV), where the end-to-end delay of a requested service function chain (SFC) is minimized and the compute, storage, I/O and bandwidth resources are considered. To address this problem, an integer encoding grey wolf optimizer (IEGWO) is proposed. IEGWO has two significant features, namely an integer encoding scheme and a new wolf position update mechanism. The integer encoding scheme is problem-specific and offers a natural way to represent VNF-P solutions. The proposed wolf position update mechanism divides the wolf pack into two groups in each iteration, where one group performs exploitation while the other focuses on global exploration, giving the search balanced local exploitation and global exploration during evolution. Performance evaluation has been conducted on 20 test instances, and IEGWO is compared with five state-of-the-art meta-heuristics: the black hole algorithm (BH), the genetic algorithm (GA), group counseling optimization (GCO), particle swarm optimization (PSO) and teaching–learning-based optimization (TLBO). Simulation results demonstrate that, compared with BH, GA, GCO, PSO and TLBO, IEGWO achieves significantly better solution quality regarding the mean (standard deviation), boxplot and t-test results of the best fitness values obtained.
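A toy sketch of the two-group update mechanism described above (purely illustrative: the leader-gene copying, the mutation rate and the 50/50 split are assumptions, not the paper's actual operators) could look like this, with each integer gene assigning one VNF to a node:

```python
import random

def iegwo_step(pack, fitness, nodes, exploit_frac=0.5):
    """One hypothetical IEGWO-style iteration. Wolves are integer-encoded
    (position d holds the node id hosting VNF d). The pack (>= 3 wolves) is
    split each iteration: an exploitation group copies genes from the three
    leaders (alpha, beta, delta), an exploration group randomly reassigns
    genes to keep searching globally."""
    ranked = sorted(pack, key=fitness)              # lower fitness = better
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    cut = int(len(pack) * exploit_frac)
    new_pack = []
    for k, wolf in enumerate(ranked):
        if k < cut:   # exploitation: each gene drawn from one of the leaders
            new = [random.choice((alpha[d], beta[d], delta[d]))
                   for d in range(len(wolf))]
        else:         # exploration: random node reassignment per gene
            new = [random.choice(nodes) if random.random() < 0.3 else wolf[d]
                   for d in range(len(wolf))]
        new_pack.append(new)
    return new_pack
```

Because genes are always node ids, the integer encoding is preserved without the rounding step a continuous GWO would need.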
  • A distributed charging strategy based on day ahead price model for
           PV-powered electric vehicle charging station
    • Abstract: Publication date: Available online 11 January 2019. Source: Applied Soft Computing. Author(s): Tao Rui, Cungang Hu, Guoli Li, Jisheng Tao, Weixiang Shen. This paper studies a distributed charging model based on a day-ahead optimal internal price for a PV-powered Electric Vehicle (EV) Charging Station (PVCS). Considering the feed-in tariff of PV energy, the price of the utility grid and a PV forecast model based on a back-propagation neural network (BPNN), a system operation model of the PVCS is introduced, which consists of the profit model of the PVCS operator (PO) and the cost model of the EV users. The model proposed in this paper can be designed as a Stackelberg game, where the PO acts as the leader and all participating EV users are regarded as the followers. Optimization strategies based on a heuristic algorithm and nonlinear constrained programming are adopted by the PO and each EV user, respectively. Moreover, a real-time billing strategy is proposed to deal with the errors between the forecasted PV energy and the expected charging arrangements. Finally, through a practical case, the validity of the model is verified in terms of increasing operation profit and reducing charging cost.
  • Combining binary classifiers in different dichotomy spaces for text categorization
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Roberto H.W. Pinheiro, George D.C. Cavalcanti, Ing Ren Tsang. Several supervised machine learning applications are commonly represented as multi-class problems, but it is harder to distinguish several classes than just two classes. In contrast to the one-against-all and all-pairs approaches that transform a multi-class problem into a set of binary problems, Dichotomy Transformation (DT) converts a multi-class problem into a different problem whose goal is to verify whether a pair of documents belongs to the same class or not. To perform this task, DT generates a dichotomy set obtained by combining pairs of documents, each belonging to either a positive class (documents in the pair that have the same class) or a negative class (documents in the pair that come from different classes). The definition of this dichotomy set plays an important role in the overall accuracy of the system. An alternative that avoids searching for the single best dichotomy set is to use a multiple classifier system: many different sets can be generated, each used to train one binary classifier, instead of relying on only one dichotomy set. Herein we propose Combined Dichotomy Transformations (CoDiT), a Text Categorization system that combines binary classifiers trained with different dichotomy sets using DT. By using DT, the number of training examples grows quadratically (one example per document pair) compared with the original training set. This is a desirable property because each classifier can be trained with different data without reducing the number of examples or features. Therefore, it is possible to compose an ensemble with diverse and strong classifiers. Experiments using 14 databases show that CoDiT achieves statistically better results in comparison to SVM, Bagging, Random Subspace, BoosTexter, and Random Forest.
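The Dichotomy Transformation itself is easy to sketch. This is a minimal illustration (the pair representation as an absolute feature-wise difference is one common choice, assumed here; the paper's exact pairing and sampling scheme is not given in the abstract):

```python
from itertools import combinations

def dichotomy_transform(X, y):
    """Turn a multi-class dataset into a binary one: each new example is a
    pair of documents, labeled +1 if both come from the same class and -1
    otherwise. The pair is represented by the feature-wise absolute
    difference of the two documents."""
    pairs, labels = [], []
    for (xi, yi), (xj, yj) in combinations(zip(X, y), 2):
        pairs.append([abs(a - b) for a, b in zip(xi, xj)])
        labels.append(1 if yi == yj else -1)
    return pairs, labels
```

With n documents this yields n(n-1)/2 binary examples, which is why subsampling different dichotomy sets (as CoDiT does) gives each ensemble member plenty of distinct training data.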
  • On the difference in control performance of interval type-2 fuzzy PI
           control system with different FOU shapes
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Chun Ming Tommy Yip, Woei Wan Tan, Maowen Nie. Interval type-2 fuzzy logic controllers (IT2-FLCs) have been attracting a lot of attention. However, challenges in designing IT2-FLCs remain. One of the main challenges is choosing the appropriate FOU shape for interval type-2 fuzzy sets (IT2-FSs). This paper analyses the differences in control performance between three IT2 fuzzy PI controllers (IT2-F-PICs) with different FOU shapes as antecedent sets, namely the triangular top wide IT2 fuzzy set, the triangular bottom wide IT2 fuzzy set and the trapezoidal (also called parallel) IT2 fuzzy set. First, the analytical structures of these IT2-FLCs are derived and the mathematical input–output equations are obtained. Three interesting differences between the analytical structures and input–output relationships of the IT2-F-PICs are then presented. From the differences in the analytical structures of the three IT2-F-PICs and numerical simulation results, it is demonstrated that IT2-F-PICs with trapezoidal (IT2-F-PI-P) and triangular bottom wide (IT2-F-PI-BW) antecedent sets have the potential to provide faster transient responses and faster settling times than the IT2-F-PIC with triangular top wide antecedent sets (IT2-F-PI-TW). In addition, IT2-F-PI-P is better able to handle plant uncertainties and disturbances than IT2-F-PI-BW and IT2-F-PI-TW. The contribution of this paper is to provide insights into the performance differences between controllers with different FOU shapes, which in turn allows control designers to select the appropriate FOU shape to meet design requirements.
  • A survey on projection neural networks and their applications
    • Abstract: Publication date: March 2019. Source: Applied Soft Computing, Volume 76. Author(s): Long Jin, Shuai Li, Bin Hu, Mei Liu. Constrained optimization problems arise in numerous scientific and engineering applications, and many papers on the online solution of constrained optimization problems using projection neural networks have been published in the literature. The purpose of this paper is to provide a comprehensive review of the research on projection neural networks for solving various constrained optimization problems as well as their applications. Since convergence and stability are important for projection neural networks, theoretical results on projection neural networks are reviewed in detail. In addition, various applications of projection neural networks, e.g., the motion generation of redundant robot manipulators, coordination control of multiple robots with limited communications, generation of winner-take-all strategies, model predictive control and WSN localization, are discussed and compared. Concluding remarks and future directions for projection neural networks as well as their applications are provided.
  • Artificial bee colony optimization for the quadratic assignment problem
    • Abstract: Publication date: Available online 8 January 2019. Source: Applied Soft Computing. Author(s): Tansel Dokeroglu, Ender Sevinc, Ahmet Cosar. We propose hybrid Artificial Bee Colony (ABC) optimization algorithms for the well-known Quadratic Assignment Problem (QAP). Large problem instances of the QAP are still very challenging, and no method is yet known that obtains exact solutions for them. The ABC has been reported to be an efficient meta-heuristic for the solution of many intractable problems, and its promising results make it a good candidate for obtaining (near-)optimal solutions to well-known NP-Hard problems. The proposed ABC algorithm (ABC-QAP) and its parallel version (PABC-QAP) are the first applications of the ABC meta-heuristic together with Tabu search to the optimization of the QAP. The behavior of employed, onlooker and scout bees is modeled using the distributed-memory parallel computation paradigm for large problem instances of the QAP: scout bees search for food sources; employed bees go to a food source, return to the hive and share their information in the dance area; onlooker bees watch the dances of employed bees and choose food sources accordingly. The robust Tabu search method is used to simulate the exploration and exploitation processes of the bees. 125 of 134 benchmark problem instances from the QAPLIB library are solved optimally, and a 0.27% deviation is reported for 9 large problem instances that could not be solved optimally. The performance of the ABC optimization algorithms is competitive with state-of-the-art meta-heuristic algorithms in the literature.
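The bee phases described above map naturally onto code. The following is a minimal serial sketch (assumptions: a simple swap neighborhood stands in for the paper's robust Tabu search, and the employed/onlooker phases are merged; it is not the authors' ABC-QAP):

```python
import random

def qap_cost(perm, F, D):
    """QAP objective: sum over i, j of flow F[i][j] * distance D[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def abc_qap(F, D, n_bees=10, iters=200, limit=20, seed=0):
    """Toy ABC for the QAP: each food source is a facility-to-location
    permutation; bees improve sources with swap moves, and a source abandoned
    too long (the 'limit' counter) triggers a scout-bee random restart."""
    random.seed(seed)
    n = len(F)
    hive = [random.sample(range(n), n) for _ in range(n_bees)]
    stale = [0] * n_bees
    best = min(hive, key=lambda p: qap_cost(p, F, D))
    for _ in range(iters):
        for b in range(n_bees):
            i, j = random.sample(range(n), 2)
            cand = hive[b][:]
            cand[i], cand[j] = cand[j], cand[i]     # swap move (local search)
            if qap_cost(cand, F, D) < qap_cost(hive[b], F, D):
                hive[b], stale[b] = cand, 0         # employed bee keeps improvement
            else:
                stale[b] += 1
            if stale[b] > limit:                    # scout phase: abandon source
                hive[b], stale[b] = random.sample(range(n), n), 0
        best = min(hive + [best], key=lambda p: qap_cost(p, F, D))
    return best
```

Replacing the swap move with a Tabu search pass, and distributing the hive across processes, would bring the sketch closer to the parallel design the abstract outlines.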
  • An efficient Differential Evolution algorithm for stochastic OPF based
           active-reactive power dispatch problem considering renewable generators
    • Abstract: Publication date: Available online 4 January 2019. Source: Applied Soft Computing. Author(s): Noor H. Awad, Mostafa Z. Ali, Rammohan Mallipeddi, Ponnuthurai N. Suganthan. Optimal active-reactive power dispatch (OARPD) problems are non-convex and highly nonlinear complex optimization problems. Typically, such problems are expensive in terms of computational time and cost due to the load variations over the scheduling period. The conventional constraint-based solvers generally used to tackle such problems require a considerably high budget and may not provide high-quality solutions. In the last decade, the complexity of OARPD has further increased due to the incorporation of renewable energy sources such as wind, solar and small-hydro generators. More specifically, the incorporation of renewable sources introduces uncertainty in generation on top of the load variations in conventional OARPD, making the problem more complicated. Recently, Differential Evolution (DE) has been viewed as an excellent algorithm for solving OARPD problems, due to its effectiveness in optimizing an objective function subject to many operational constraints. A new efficient Differential Evolution algorithm, denoted DEa-AR, is proposed to solve contemporary stochastic optimal power flow OARPD problems considering renewable generators. DEa-AR uses arithmetic recombination crossover and adapts the scaling factor based on a Laplace distribution. In addition, an efficient archive strategy, which acts as a mirror image of the population and stores inferior individuals for later use, is also incorporated. The aim of this strategy is to use the information of inferior individuals as a direction toward finding new good solutions. The IEEE 57-bus system is used to evaluate the OARPD problems with different stochastic scenarios based on different probability distributions employed to model the parameters of the renewable energy sources. The performance of the proposed work is compared with other state-of-the-art algorithms. Simulation results indicate that the proposed technique can solve OARPD problems with renewable sources effectively and can provide high-quality solutions. The proposed algorithm is ranked first, with a Friedman rank of 1.8333 and a clear statistically significant difference compared with the most recent studies on the problems used.
  • An intelligent coordinator design for GCSC and AGC in a two-area hybrid
           power system
    • Abstract: Publication date: Available online 4 January 2019. Source: Applied Soft Computing. Author(s): Rahmat Khezri, Arman Oshnoei, Soroush Oshnoei, Hassan Bevrani, SM Muyeen. This study addresses the design of an optimized fuzzy fine-tuning (OFFT) approach as an intelligent coordinator for gate controlled series capacitors (GCSC) and automatic generation control (AGC) in a hybrid multi-area power system. To this end, a detailed mathematical formulation for the participation of GCSC in tie-line power flow exchange is presented. The proposed OFFT approach is intended for valid adjustment of the proportional-integral controller gains in the GCSC structure and the integral gain of the secondary control loop in the AGC structure. Unlike conventional classic controllers with constant gains that are generally designed for fixed operating conditions, the outlined approach demonstrates robust performance under load disturbances by adapting the gains of the classic controllers. The parameters are adjusted online via the fuzzy logic method, in which the sine cosine algorithm is employed to optimize the fuzzy logic. To prove the scalability of the proposed approach, the design has also been implemented on a hybrid interconnected two-area power system with the nonlinearity effects of governor dead band and generation rate constraint. The success of the proposed OFFT approach is established in three scenarios by comparing the dynamic performance of the concerned power system under several optimization algorithms, including the artificial bee colony algorithm, genetic algorithm, improved particle swarm optimization algorithm, ant colony optimization algorithm and sine cosine algorithm.
  • Iterative generation of insight from text collections through mutually
           reinforcing visualizations and fuzzy cognitive maps
    • Abstract: Publication date: Available online 3 January 2019. Source: Applied Soft Computing. Author(s): Venkata Sai Pillutla, Philippe J. Giabbanelli. Developing a comprehensive explanation of complex social phenomena is a difficult task that analysts often have to perform using vast collections of text documents. On the one hand, solutions exist to assist analysts in creating causal maps from text documents, but these can only articulate the relationships at work in a problem. On the other hand, Fuzzy Cognitive Maps (FCMs) can articulate these relationships and perform simulations, but no environment exists to help analysts iteratively develop FCMs from text. In this paper, we detail the design and implementation of the first tool that allows analysts to develop FCMs from text collections using interactive visualizations. We make three contributions: (i) we combine text mining and FCMs, (ii) we implement the first visual analytics environment built on FCMs, and (iii) we promote a strong feedback loop between interactive data exploration and model building. We provide two case studies exemplifying how to create a model from the ground up or improve an existing one. Limitations include the increase in display complexity when working with large collections of files, and the reliance on KL-divergence for ad-hoc retrieval. Several improvements are discussed to further support analysts in creating high-quality models through interactive visualizations.
  • Decomposition-based evolutionary dynamic multiobjective optimization using
           a difference model
    • Abstract: Publication date: Available online 2 January 2019. Source: Applied Soft Computing. Author(s): Leilei Cao, Lihong Xu, Erik D. Goodman, Hui Li. This paper presents a novel prediction model combined with a multiobjective evolutionary algorithm based on decomposition to solve dynamic multiobjective optimization problems. In our model, the motion of the approximated Pareto-optimal solutions (POS) over time is represented by the motion of the centroid, and the other solutions are assumed to have the same motion as the centroid. A history of recent centroid locations is used to build a difference model that estimates the later motion of the centroid when an environmental change is detected, and the new locations of the other solutions are then predicted based on their current locations and the estimated motion. The predicted solutions, combined with some retained solutions, form a new population to explore the new environment, and are expected to track the new POS and/or Pareto-optimal front relatively well. The proposed algorithm is compared with four state-of-the-art dynamic multiobjective evolutionary algorithms on 20 benchmark problems with differing dynamic characteristics. The experimental studies show that the proposed algorithm is effective in dealing with dynamic problems and clearly outperforms the competitors.
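The core of the difference model is a first-order extrapolation of the centroid's motion. A minimal sketch (the first-order step and the shared-motion assumption are exactly what the abstract states; anything beyond that, such as the retained-solutions mixing, is omitted):

```python
def predict_population(solutions, centroid_history):
    """Difference-model prediction: estimate the centroid's next move as the
    difference between its two most recent locations, then shift every
    solution by that same step (all solutions are assumed to share the
    centroid's motion)."""
    c_prev, c_curr = centroid_history[-2], centroid_history[-1]
    step = [b - a for a, b in zip(c_prev, c_curr)]
    return [[x + s for x, s in zip(sol, step)] for sol in solutions]
```

After an environmental change is detected, this predicted population (plus some retained solutions, per the abstract) seeds the search in the new environment.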
  • An alternative SMOTE oversampling strategy for high-dimensional datasets
    • Abstract: Publication date: Available online 27 December 2018. Source: Applied Soft Computing. Author(s): Sebastián Maldonado, Julio López, Carla Vairetti. In this work, the Synthetic Minority Over-sampling Technique (SMOTE) approach is adapted for high-dimensional binary settings. A novel distance metric is proposed for the computation of the neighborhood of each minority sample, which takes into account only a subset of the available attributes that are relevant for the task. Three variants of the distance metric are explored: Euclidean, Manhattan, and Chebyshev distances, along with four different ranking strategies: Fisher Score, Mutual Information, Eigenvector Centrality, and Correlation Score. Our proposal was compared with various oversampling techniques on low- and high-dimensional datasets with class imbalance, including a case study on Natural Language Processing (NLP). The proposed oversampling strategy showed superior results on average when compared with SMOTE and other variants, demonstrating the importance of selecting the right attributes when defining the neighborhood in SMOTE-based oversampling methods.
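The idea of restricting the SMOTE neighborhood to top-ranked attributes can be sketched briefly (a toy version: the feature scores are assumed given, only the Manhattan variant is shown, and the interpolation is standard SMOTE rather than the authors' exact procedure):

```python
import random

def smote_subset(minority, scores, k_feats=2, k_nn=2, seed=0):
    """SMOTE-style oversampling where each minority sample's neighborhood is
    computed with a Manhattan distance restricted to the top-ranked features
    (per `scores`), then a synthetic sample is interpolated between the
    sample and one random neighbor."""
    random.seed(seed)
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k_feats]

    def dist(a, b):
        # distance over the selected relevant attributes only
        return sum(abs(a[f] - b[f]) for f in top)

    synthetic = []
    for x in minority:
        nbrs = sorted((s for s in minority if s is not x),
                      key=lambda s: dist(x, s))[:k_nn]
        nb = random.choice(nbrs)
        lam = random.random()
        synthetic.append([xi + lam * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic
```

Swapping `dist` for a Chebyshev or Euclidean form, and `scores` for Fisher Score or Mutual Information rankings, covers the variants the abstract enumerates.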
  • Outlier detection based on Gaussian process with application to industrial processes
    • Abstract: Publication date: Available online 27 December 2018. Source: Applied Soft Computing. Author(s): Biao Wang, Zhizhong Mao. Due to the extensive usage of data-based techniques in industrial processes, detecting outliers in industrial process data becomes increasingly indispensable. This paper proposes an outlier detection scheme that can be directly used for either process monitoring or process control. Based on traditional Gaussian process regression, we develop several detection algorithms, for which the mean function, covariance function, likelihood function and inference method are specially devised. Compared with traditional detection methods, the proposed scheme relies on fewer assumptions and is more suitable for modern industrial processes. The effectiveness of the proposed scheme is verified by experiments on both synthetic and real-life data sets.
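A bare-bones example of GP-based outlier scoring (a sketch only: the RBF kernel, constant-zero mean and standardized-residual rule are generic choices assumed here, not the paper's specially devised functions):

```python
import numpy as np

def gp_outlier_scores(x, y, ls=1.0, sig=1.0, noise=1.0):
    """Fit a GP regressor (RBF kernel, zero mean) to 1-D data (x, y), then
    score each point by the magnitude of its standardized residual from the
    posterior mean; large scores flag likely outliers."""
    X = np.asarray(x, float)[:, None]
    K = sig**2 * np.exp(-0.5 * (X - X.T) ** 2 / ls**2)   # RBF Gram matrix
    Kn = K + noise**2 * np.eye(len(x))                   # add observation noise
    alpha = np.linalg.solve(Kn, np.asarray(y, float))
    mean = K @ alpha                                     # posterior mean at x
    resid = np.asarray(y, float) - mean
    return np.abs(resid) / resid.std()
```

The noise level controls how aggressively the GP smooths: with a non-trivial `noise`, an isolated spike cannot be interpolated away and stands out in the residuals.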
  • Two effective metaheuristic algorithms for solving a stochastic
           optimization model of a multi-echelon supply chain
    • Abstract: Publication date: Available online 27 December 2018. Source: Applied Soft Computing. Author(s): Sasan Khalifehzadeh, M.B. Fakhrzad, Yahia Zare Mehrjerdi, Hasan Hosseini_Nasab. Production-distribution network (PDN) planning problems in multi-echelon settings are typically complex, with dynamic relationships that introduce uncertainty into several parameters of the network. In this paper, we formulate a multi-echelon PDN that delivers products to customers with uncertain demand in the least time under uncertain delivery lead times. The proposed network includes multiple suppliers, producers, potential depots, retailers, and inland and outland customers over a multi-period horizon. A stochastic multi-objective model is designed that maximizes the total profit of the system and minimizes the total delivery lead time. We apply a chance-constrained approach to handle the uncertainty of the model and introduce two metaheuristics, the selective firefly algorithm (SFA) and the ranking genetic algorithm (RGA), to solve instances of various sizes, including real-world ones. Finally, the performance of the two proposed algorithms is examined on these instances. The results indicate average improvements of 6.12% and 8.93% when applying SFA and RGA, respectively.
  • Resource scheduling algorithm with load balancing for cloud service
    • Abstract: Publication date: Available online 27 December 2018. Source: Applied Soft Computing. Author(s): V. Priya, C. Sathiya Kumar, Ramani Kannan. Cloud computing uses scheduling and load balancing for virtualized file sharing in cloud infrastructure. Both must be performed in an optimized manner in the cloud computing environment to achieve optimal file sharing. Recently, scalable traffic management has been developed in cloud data centers for traffic load balancing and quality-of-service provisioning. However, reducing latency during multidimensional resource allocation remains a challenge; efficient resource scheduling is therefore needed to ensure load optimization in the cloud. The objective of this work is to introduce an integrated resource scheduling and load balancing algorithm for efficient cloud service provisioning. The method constructs a Fuzzy-based Multidimensional Resource Scheduling model to obtain resource scheduling efficiency in cloud infrastructure. Utilization of virtual machines is then increased through effective and fair load balancing by dynamically selecting a request from a class using a Multidimensional Queuing Load Optimization algorithm. A load balancing algorithm is then implemented to avoid underutilization and overutilization of resources, improving the latency for each class of request. Simulations conducted with the CloudSim simulator in cloud data centers show that the proposed method achieves better performance in terms of average success rate, resource scheduling efficiency, and response time, improving resource scheduling efficiency by 7% and reducing response time by 35.5% compared with state-of-the-art works.
  • Parallel memetic algorithm for training recurrent neural networks for the
           energy efficiency problem
    • Abstract: Publication date: Available online 26 December 2018. Source: Applied Soft Computing. Author(s): L.G.B. Ruiz, M.I. Capel, M.C. Pegalajar. In this study, we improve neural-network-based models for predicting energy consumption in buildings by parallelizing the CHC adaptive search algorithm. Comparing the sequential implementation of the evolutionary algorithm with the new parallel version for obtaining predictors, we found that the parallel version halved the execution time of the sequential one. New predictors based on various classes of neural networks have been developed, and the results obtained support the validity of the proposed approaches, with an average reduction of 75% in execution time relative to the previous sequential implementations.
  • Locally convex-regions approximation using an incremental quadratic-based
           fuzzy clustering
    • Abstract: Publication date: Available online 26 December 2018. Source: Applied Soft Computing. Author(s): Mostafa Kalhor, Mehdi Rahmani, Ahmad Kalhor. In many optimization problems it is desirable to choose a local optimal region that satisfies location priorities and guarantees sufficient robustness against measurement biases. To this end, all locally convex regions that potentially contain optimal points must be approximated. In this paper, locally convex regions of Multiple-Convex Functions (MCFs) are approximated using quadratic-based fuzzy clustering. First, using an incremental fuzzy clustering approach, the input space is partitioned into hyper-rectangular regions in which Locally Quadratic Models (LQMs) are identified. Based on the Hessian matrices of the LQMs, clusters that potentially contain convex regions are chosen. Around a certain patch of each chosen cluster, a high-order model is fitted, through which a Gradient-based Ordinary Differential Equation (GODE) is defined. By estimating the domain of attraction of each defined GODE, a locally convex region is approximated. The robustness of the approximated convex regions against unknown bounded biases of the input variables is then discussed, and a theorem is stated that conservatively determines the sub-regions remaining convex even in the presence of uncertainty. An illustrative example explains the methodology of the proposed method, which is then applied to the power economic dispatch (PED) problem. The achieved results demonstrate the capability of the proposed method.
  • A new approach to deal with consistency and consensus issues for hesitant
           fuzzy linguistic preference relations
    • Abstract: Publication date: Available online 26 December 2018. Source: Applied Soft Computing. Author(s): Nana Liu, Yue He, Zeshui Xu. Hesitant fuzzy linguistic preference relations (HFLPRs), an efficient and common tool for decision-making problems, have been widely used in real life. Consistency and consensus are the two most important topics for HFLPRs. In this paper, we develop a new, efficient consistency-consensus framework for HFLPRs. First, we transform HFLPRs into probabilistic linguistic preference relations and develop a programming model that makes the HFLPRs achieve the maximum consistency degree. We then propose new rules for the addition operation and the weighted average operator to fuse individual preference information. After that, a maximum consensus model is developed that maximizes the consensus degree by adjusting the experts' weights. A numerical case of temporary placement selection after an earthquake is presented, and comparisons with traditional methods are conducted to verify the rationality and efficiency of our framework. Finally, we end the paper with some conclusions and future research directions.
  • Ensemble mating selection in evolutionary many-objective search
    • Abstract: Publication date: Available online 21 December 2018. Source: Applied Soft Computing. Author(s): Yu-Hui Zhang, Yue-Jiao Gong, Tian-Long Gu, Jun Zhang. Traditional multi-objective evolutionary algorithms encounter difficulties when handling many-objective problems because of the loss of selection pressure incurred by the growing size of the objective space. A variety of environmental selection operators have been proposed to address the issue, each with its own benefits and drawbacks. We develop a novel ensemble framework to enhance the effectiveness and robustness of many-objective optimization. The framework incorporates multiple environmental selection operators to guide the search; these operators are then viewed as voters that construct a mating pool. We design an ensemble mating selection strategy that makes decisions based on the preference information provided by the voters: individuals elected by more voters are assigned larger probabilities of entering the mating pool. In this way, high-quality offspring can be reproduced from the elected promising candidates. To accommodate the multiple selection operators used for voting, the framework maintains multiple parallel populations, each updated by one of the selection operators. An instantiation of the framework with three popular operators is presented as a prime example. Extensive experiments on a number of many-objective problems show that the mating selection strategy is capable of improving the quality of the obtained solution set.
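A toy rendering of the voting idea described above, with our own simplifications: each selection operator ("voter") elects a subset of population indices, and an individual's chance of entering the mating pool grows with its vote count. The operator interface and the +1 smoothing weight are assumptions, not the paper's exact strategy.

```python
import random

def build_mating_pool(population, voters, pool_size, seed=0):
    """Build a mating pool by roulette selection where each individual's
    weight is 1 + the number of voters that elected its index; the +1
    keeps unelected individuals possible with small probability."""
    rng = random.Random(seed)
    votes = [0] * len(population)
    for voter in voters:
        for i in voter(population):
            votes[i] += 1
    weights = [1 + v for v in votes]
    total = float(sum(weights))
    pool = []
    for _ in range(pool_size):
        r = rng.uniform(0, total)
        acc = 0.0
        for individual, w in zip(population, weights):
            acc += w
            if r <= acc:
                pool.append(individual)
                break
        else:
            # guard against floating-point rounding at the upper edge
            pool.append(population[-1])
    return pool
```

With two voters that both favour the high-fitness individuals, those individuals dominate the pool while low-vote individuals still appear occasionally.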
  • Online identification of a rotary wing Unmanned Aerial Vehicle from data
    • Abstract: Publication date: Available online 21 December 2018. Source: Applied Soft Computing. Author(s): Md. Meftahul Ferdaus, Mahardhika Pratama, Sreenatha G. Anavatti, Matthew A. Garratt. Until now, the majority of neuro and fuzzy modeling and control approaches for rotary wing Unmanned Aerial Vehicles (UAVs), such as the quadrotor, have been based on batch learning techniques; they are therefore static in structure and cannot adapt to rapidly changing environments. Model-free, data-driven fuzzy techniques based on Evolving Intelligent Systems (EISs) are good alternatives, since they evolve both their structure and parameters to cope with sudden changes in behavior and operate in a single-pass learning mode suitable for online real-time deployment. The Metacognitive Scaffolding Learning Machine (McSLM) can be seen as a generalized EIS, since the metacognitive concept enables the what-to-learn, how-to-learn, and when-to-learn scheme, and scaffolding theory realizes a plug-and-play property that strengthens the online working principle of EISs. This paper proposes a novel McSLM-based online identification scheme, the Metacognitive Scaffolding Interval Type 2 Recurrent Fuzzy Neural Network (McSIT2RFNN), applied to a quadrotor using real-time experimental flight data streams. Our approach demonstrates significant improvements in both accuracy and complexity over several renowned existing variants of McSLMs and EISs.
  • Flexible time horizon project portfolio optimization with consumption and
           risk control
    • Abstract: Publication date: Available online 21 December 2018. Source: Applied Soft Computing. Author(s): Yong-Jun Liu, Wei-Guo Zhang. Most existing models for project portfolio selection are proposed within a fixed-investment-horizon framework. One recent and promising strategy is flexible-time-horizon investment, which gives investors more opportunities to increase their gains. The aim of this paper is to discuss the optimal investment and consumption decisions for project portfolio selection with a flexible time horizon, in which wealth is divided between a riskless asset and multiple projects. Two flexible-time-horizon project portfolio selection models are proposed. Both models take realistic decision criteria into consideration, including risk control to avoid the possibility of bankruptcy in each period, project start-time constraints, and a reinvestment strategy. A novel genetic algorithm is then designed to solve the proposed models. Finally, two numerical examples demonstrate the application of the two models and illustrate the effectiveness of the designed solution algorithm.
  • A machine learning approach to assess price sensitivity with application
           to automobile loan segmentation
    • Abstract: Publication date: Available online 21 December 2018. Source: Applied Soft Computing. Author(s): Jorge M. Arevalillo. Price sensitivity is an outstanding business issue in companies and organizations that aim to make optimal managerial decisions to increase sales and/or revenue. Hence, price sensitivity assessment has become a fashionable problem that has attracted the attention of a wide variety of actors and business units within organizations. In this paper we propose a machine learning approach to assess price sensitivity for an automobile loan portfolio, in order to obtain a segmentation revealing groups with differential price sensitivity, defined by their differential purchase responses to changes in the loan interest rate. The proposed method combines the power of conditional inference trees, random forests, and model-based recursive partitioning algorithms in a process of price group finding, variable selection, and price sensitivity segmentation that uncovers such differential groups and characterizes them by asset and product characteristics as well as by customer attributes. The resulting segmentation defines high-sensitivity groups, for which interest rate reductions can be recommended to increase sales, as well as nearly insensitive groups, for which a price strategy that increases the interest rate is expected to have only a slight impact on loan disbursements.
  • Towards a unified multi-source-based optimization framework for
           multi-label learning
    • Abstract: Publication date: Available online 20 December 2018. Source: Applied Soft Computing. Author(s): Jia Zhang, Candong Li, Zhenqiang Sun, Zhiming Luo, Changen Zhou, Shaozi Li. In the era of Big Data, a practical yet challenging task is to make learning techniques more universally applicable to complex learning problems such as multi-source multi-label learning. While early work has developed many effective solutions for multi-label classification and multi-source fusion separately, in this paper we address the two problems together and propose a novel method for the joint learning of multiple class labels and data sources. An optimization framework is constructed to formulate the learning problem, and the multi-label classification result is induced by the weighted combination of the decisions from multiple sources. The proposed method is effective in exploiting label correlations and fusing multi-source data, especially long-tail data. Experiments on various multi-source multi-label data sets reveal the advantages of the proposed method.
  • A semi-quantitative modelling application for assessing energy efficiency
    • Abstract: Publication date: Available online 18 December 2018. Source: Applied Soft Computing. Author(s): Alexandros Nikas, Emmanouil Ntanos, Haris Doukas. Given the international efforts to tackle climate change, as well as potential dependence on conventional energy imports and an adverse economic environment, countries in the European Union face significant challenges in the critical task of enhancing energy efficiency. Approaches oriented exclusively toward detailed quantitative modelling tools, such as energy system and climate-economy models, tend to exclude certain policy instruments and risks, and to be too formalised or complex for policymakers to participate in, understand, and trust. Several decision support frameworks have been proposed for bridging this policy-model gap and helping policymakers confidently select among alternative strategies. This paper employs the expert-driven method of fuzzy cognitive mapping, a semi-quantitative modelling technique in which system dynamics are captured and simulated against different scenarios. To this end, ESQAPE, an innovative decision support tool for building and simulating complex fuzzy cognitive maps for assessing policy strategies with the help of experts, is introduced. An application in Greece shows that long-term energy efficiency measures focusing mainly on behavioural change in the residential sector, as opposed to services in the private and public sectors, are perceived to be more sustainable in a socio-economically optimistic future; this is not the case when challenges across the mitigation and adaptation axes are expected to be higher.
  • Bandit-based cooperative coevolution for tackling contribution imbalance
           in large-scale optimization problems
    • Abstract: Publication date: Available online 18 December 2018. Source: Applied Soft Computing. Author(s): Borhan Kazimipour, Mohammad Nabi Omidvar, A.K. Qin, Xiaodong Li, Xin Yao. This paper addresses computational resource allocation in the context of cooperative coevolution. Cooperative coevolution typically works by breaking a problem down into smaller subproblems (or components) and coevolving them in a round-robin fashion, resulting in a uniform resource allocation among the components. Despite its success on a wide range of problems, cooperative coevolution struggles to perform efficiently when its components do not contribute equally to the overall objective value. This is of crucial importance in large-scale optimization problems, where such differences are further magnified. To resolve this imbalance, we extend standard cooperative coevolution to a new generic framework that learns the contribution of each component using multi-armed bandit techniques. The new framework allocates computational resources to each component in proportion to its contribution toward improving the overall objective value, resulting in a more economical use of limited computational resources. We study different aspects of the proposed framework through extensive experiments. Our empirical results confirm that even a simple bandit-based credit assignment scheme can significantly improve the performance of cooperative coevolution on large-scale continuous problems, leading to performance competitive with state-of-the-art algorithms.
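To make the bandit-based allocation idea concrete, here is a generic UCB1-style sketch: each subproblem component is an "arm", and the reward of pulling an arm is the objective improvement obtained when that component was last optimized. The reward definition, interface, and all names are our illustrative assumptions, not the paper's framework.

```python
import math
import random

def bandit_cc(components, budget, c=1.4, seed=0):
    """Allocate optimization rounds to components UCB1-style.
    components: callables returning the objective improvement achieved
    by optimizing that component for one round (here stochastic stubs)."""
    rng = random.Random(seed)
    n = len(components)
    counts = [0] * n          # rounds spent on each component
    totals = [0.0] * n        # cumulative improvement per component
    history = []
    for t in range(1, budget + 1):
        if t <= n:
            # play every arm once before using UCB scores
            i = t - 1
        else:
            i = max(range(n), key=lambda j: totals[j] / counts[j]
                    + c * math.sqrt(math.log(t) / counts[j]))
        reward = components[i](rng)
        counts[i] += 1
        totals[i] += reward
        history.append(i)
    return counts, history
```

A component that consistently yields larger improvements ends up receiving most of the budget, which is exactly the non-uniform allocation the abstract argues for.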
  • Networked correlation-aware manufacturing service supply chain
           optimization using an extended artificial bee colony algorithm
    • Abstract: Publication date: Available online 18 December 2018. Source: Applied Soft Computing. Author(s): Shuai Zhang, Song Xu, Xiaoling Huang, Wenyu Zhang, Mingzhou Chen. Manufacturing service supply chain (MSSC) optimization has been intensively studied to find an optimal service composition with the best quality-of-service (QoS) value. However, traditional MSSC optimization methods usually assume that candidate services are independent of one another; potentially better MSSC solutions may therefore be neglected because the positive influence of correlations between services on the QoS value is not considered. This study proposes a novel networked correlation-aware manufacturing service composition (NCMSC) mathematical model to characterize the influence of vertical and horizontal correlations between services on the QoS value of an MSSC solution. To solve the NCMSC model, an extended artificial bee colony (ABC) algorithm is proposed to find a near-optimal solution with the best QoS value. The specific improvements over the original ABC algorithm are as follows: (1) a new matrix-based encoding scheme describes the MSSC solution, in which each column contains a vertical composite structure and the collaborative services for each subtask; (2) the migration operator of a biogeography-based optimization algorithm is combined with the original ABC algorithm to handle the discrete MSSC optimization problem and improve the performance of the original ABC algorithm. The experimental results illustrate the importance of networked correlations between services and the superior practicality, effectiveness, and efficiency of the extended ABC algorithm in solving the MSSC optimization problem.
  • Complex network oriented artificial bee colony algorithm for global
           bi-objective optimization in three-echelon supply chain
    • Abstract: Publication date: Available online 18 December 2018. Source: Applied Soft Computing. Author(s): Jianhua Jiang, Di Wu, Yujun Chen, Keqin Li. Finding the best flow patterns (i.e., choices of resources) for a family of products is a key part of supply chain management. It primarily involves reasonably selecting suppliers for every component, plants for assembling every sub- or final assembly, and delivery options to bring products to customers. Different selections yield different costs and lead times, and balancing the trade-off between cost and lead time is a non-trivial problem in a three-echelon supply chain, which forms a complex network. We focus on finding the best flow patterns in which reasonable selections can be combined to provide products or services, with the objective of minimizing both cost and lead time for any product. In this paper, we propose a complex network oriented artificial bee colony algorithm, which can be processed in parallel, to tackle this combinatorial problem. In addition, we employ simulated annealing and gradient descent to find global Pareto optimal solutions in a supply chain network. Extensive experiments on the three-echelon supply chain network demonstrate the superiority of our proposals: (1) the proposed CN-ABC and CN-ABC-SAGD are capable of discovering global Pareto optimal solutions (POS) in a complex three-echelon supply chain network (SCN); (2) the search for global POS is fast enough to cope with the complexity of an SCN.
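Since the bi-objective setting above reduces to comparing (cost, lead-time) pairs by Pareto dominance, extracting the non-dominated flow patterns can be sketched minimally as follows. This is our illustration of the dominance relation only; the paper's CN-ABC machinery for searching the network is far more involved.

```python
def pareto_front(points):
    """Return the non-dominated subset for bi-objective minimization,
    e.g. (cost, lead_time) pairs of candidate flow patterns.
    A point p is dominated if some other point q is no worse in both
    objectives (and is a different point)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

Any metaheuristic for this problem (ABC included) ultimately maintains and refines such a non-dominated set.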
  • A new semantic-based feature selection method for spam filtering
    • Abstract: Publication date: Available online 17 December 2018. Source: Applied Soft Computing. Author(s): José R. Méndez, Tomás R. Cotos-Yañez, David Ruano-Ordás. The Internet emerged as a powerful infrastructure for worldwide communication and interaction among people. Some unethical uses of this technology (for instance, spam or viruses) created the challenge of developing mechanisms that guarantee an affordable and secure experience. This study deals with the massive delivery of unwanted content or advertising campaigns without the consent of the target users (also known as spam). Currently, words (tokens) are selected using feature selection schemes and then used to create feature vectors for training different Machine Learning (ML) approaches. This study introduces a new feature selection method that takes advantage of a semantic ontology to group words into topics and uses the topics to build feature vectors. To this end, we compared the performance of nine well-known ML approaches in conjunction with (i) Information Gain, the most popular feature selection method in the spam-filtering domain; (ii) Latent Dirichlet Allocation, a generative statistical model that allows sets of observations to be explained by unobserved groups that describe why some parts of the data are similar; and (iii) our semantic-based feature selection proposal. The results show the suitability and additional benefits of topic-driven methods for developing and deploying high-performance spam filters.
  • An improved grey group decision-making approach
    • Abstract: Publication date: Available online 17 December 2018. Source: Applied Soft Computing. Author(s): Yong Liu, Jun-liang Du, Yu-hong Wang. In complex group decision-making, decision makers and decision attributes are at the core of the relevant activities. Targeting the problems of scheme ranking and behavioral characteristics in group decision-making, and taking the perspective of group negotiation and decision-making system coordination, this paper exploits grey target and grey relation analysis to establish a novel grey group decision-making approach. We define a group measure matrix of schemes, a consensus ideal scheme, and a decision-making resource coefficient. Then, borrowing Nash's bargaining idea, and maximizing group negotiation satisfaction while minimizing system coordination deviation, we construct a two-step optimization model to solve for the group consensus ideal scheme and its measure value matrix. In addition, taking decision-making schemes as the research objects, we characterize and measure the closeness of decision-maker information and attribute information, along both dimensions, by the distance between the group measure matrices of a scheme and the consensus ideal scheme, which allows us to construct a novel grey scheme matrix similar incidence analysis model. Lastly, we take the group decision-making problem of selecting the location of a garbage disposal station as a case study, and explore the economic significance and theoretical value of the model.
  • Using chaotic maps for 3D boundary surveillance by quadrotor robot
    • Abstract: Publication date: Available online 17 December 2018. Source: Applied Soft Computing. Author(s): Payam Shafiei Gohari, Hossein Mohammadi, Sajjad Taghvaei. Chaotic maps have been shown to be suitable for applications that require unpredictable behavior. In this paper, chaotic maps are used for motion planning and control of a quadrotor for boundary surveillance purposes. The chaotic motion prevents anticipation of the quadrotor's future movement, especially in hostile situations where an intruding opponent exists. The chaotic trajectories are constructed from a closed three-dimensional curve representing the boundary, and it is demonstrated, by investigating the Lyapunov exponents, that the proposed 3D planning algorithm generates chaotic paths. The desired unpredictable trajectory is used as input to an under-actuated control system, and the performance of the algorithm is shown by simulation of the control system applied to the dynamic model.
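To make the idea concrete, here is a minimal generic sketch: a chaotic map drives an unpredictable parameter along a closed 3D curve. The logistic map, the tilted-circle boundary, and all parameters are our illustrative choices; the paper constructs its trajectories differently and verifies chaos via Lyapunov exponents.

```python
import math

def logistic_waypoints(n, r=3.99, s0=0.37):
    """Generate n chaotic curve parameters in (0, 1) with the logistic map
    s <- r * s * (1 - s), which is chaotic for r near 4."""
    s, out = s0, []
    for _ in range(n):
        s = r * s * (1 - s)
        out.append(s)
    return out

def boundary_point(s, radius=5.0, height=2.0):
    """Map a parameter s in [0, 1) onto a closed 3D curve (a tilted circle,
    standing in for an arbitrary surveillance boundary)."""
    t = 2 * math.pi * s
    return (radius * math.cos(t), radius * math.sin(t), height * math.sin(2 * t))

# chaotic sequence of waypoints along the boundary curve
waypoints = [boundary_point(s) for s in logistic_waypoints(20)]
```

Successive waypoints jump unpredictably along the curve, so an observer cannot extrapolate the patrol pattern, while every waypoint still lies exactly on the boundary.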
  • Mission-oriented ant-team ACO for min-max MTSP
    • Abstract: Publication date: Available online 8 December 2018. Source: Applied Soft Computing. Author(s): Li-Chih Lu, Tai-Wen Yue. The multiple traveling salesman problem (mTSP) is a combinatorial optimization problem and an extension of the famous traveling salesman problem (TSP). Not only does the mTSP possess academic research value, but its applications are extensive: the vehicle routing problem and operations scheduling, for example, can both be reduced to mTSP solutions. The mTSP is NP-hard, and multifaceted discussion of its solutions is worthwhile. This study assigns ants to teams with mission-oriented approaches to enhance ant colony optimization. Missions are appointed to ant teams before they depart, so that each ant has a different focal search direction; in addition to attempting to complete its own mission, each ant uses the max-min strategy to cooperate in optimizing the solution. The goal of appointing missions is to reduce the total distance, whereas the goal of the max-min path search is to achieve the min-max objective, i.e., labor balance. Four main elements are involved in the search process of the ant teams: the mission pheromone, the path pheromone, the greedy factor, and the max-min ant firing scheme. The experimental results show this novel approach to be constructive and effective.
  • A novel algorithm based on information diffusion and fuzzy MADM methods
           for analysis of damages caused by diabetes crisis
    • Abstract: Publication date: Available online 7 December 2018. Source: Applied Soft Computing. Author(s): M. Ebrahimi, K. Ahmadi. Diabetes mellitus is one of the most common chronic diseases in the world. A remarkable point about chronic diseases is that patients may face related complications throughout their lifetime, and the management and control of chronic diseases are among the most costly and important issues in healthcare due to the long treatment period. In this paper, the number of patients with complications of type II diabetes is studied over 19 years, and the goal is to calculate the severity of the damage caused by diabetes with a new hybrid algorithm that, for the first time, combines multi-criteria decision making, variable fuzzy set theory, and the information diffusion method. Whereas traditional probabilistic statistical methods ignore the fuzziness of risk assessment with incomplete data sets and require a large sample size, the proposed algorithm deals effectively with these challenges and produces more accurate results with fewer errors. For example, in terms of working with small data samples, it is shown that the mean error of the hybrid algorithm is 0.0198, versus 0.0264 for multiple regression and 0.0299 for the Cox proportional hazards method, i.e., 33.3% and 51% higher error than the hybrid algorithm, respectively.
  • Multiple Empirical Kernel Learning with Majority Projection for imbalanced
    • Abstract: Publication date: Available online 7 December 2018. Source: Applied Soft Computing. Author(s): Zonghai Zhu, Zhe Wang, Dongdong Li, Wenli Du. Traditional Multiple Empirical Kernel Learning (MEKL) expands the representation of samples and achieves better classification by using different empirical kernels to map the original data space into multiple kernel spaces. To make MEKL suitable for imbalanced problems, this paper introduces a weight matrix and a regularization term into MEKL. The weight matrix assigns a high misclassification cost to minority samples to balance the misclassification cost between the minority and majority classes. The regularization term, named Majority Projection (MP), makes the classification hyperplane fit the distribution shape of the majority samples and enlarges the between-class distance of the minority and majority classes. The contributions of this work are: (i) assigning a high cost to minority samples to deal with imbalanced problems, (ii) introducing a new regularization term that accounts for the property of the data distribution, and (iii) modifying the original PAC-Bayes bound to test the error upper bound of MEKL-MP. Analysis of the experimental results shows that the proposed MEKL-MP is well suited to imbalanced problems and has lower generalization risk according to the value of the PAC-Bayes bound.
  • Particle swarm optimization with damping factor and cooperative mechanism
    • Abstract: Publication date: Available online 6 December 2018. Source: Applied Soft Computing. Author(s): Mingfu He, Mingzhe Liu, Ruili Wang, Xin Jiang, Bingqi Liu, Helen Zhou. A novel variant of particle swarm optimization with a damping factor and a cooperative mechanism (PSO-DFCM) is proposed to search for global optima in large-scale, high-dimensional search spaces. In this search strategy, the exploring and exploiting abilities of the particles are balanced by introducing a new parameter, a damping factor α, which adjusts the position information inherited from the previous state. A cooperative mechanism between global-best-oriented and local-best-oriented swarms helps find the global optima quickly. To reduce the negative effect of unfavourable particles on swarm evolution, a new concept based on evolution history, the least optimal particle in the individuals' histories (pleast), is defined to decide whether the current information of a particle is abandoned and reinitialized. In addition, fuzzy c-means clustering is applied to cluster the particles' positions for establishing the neighbourhood of individuals. A comparative study on benchmark test functions demonstrates that the proposed PSO outperforms the standard PSO and three state-of-the-art PSO variants in terms of convergence to the global optimum and final results.
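A minimal sketch of the damping idea, under our own assumptions: we read "adjusting the position information inherited from the previous state" as a position update x <- α·x + v, and omit the cooperative swarms, pleast reinitialization, and fuzzy c-means neighbourhoods entirely. All names and parameter values are illustrative, not the paper's.

```python
import random

def pso_damped(f, dim, n_particles=20, iters=200, alpha=0.99,
               w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=1):
    """PSO sketch where only a fraction alpha of the previous position is
    inherited at each step (the 'damping factor' idea)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                # personal best positions
    pf = [f(x) for x in X]               # personal best values
    g = P[min(range(n_particles), key=lambda i: pf[i])][:]  # global best
    gf = min(pf)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = alpha * X[i][d] + V[i][d]   # damped inheritance
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, g = fx, X[i][:]
    return g, gf

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso_damped(sphere, dim=3)
```

Note that on the sphere function the damping term itself pulls particles toward the optimum at the origin, so this toy run flatters the method; the benchmark comparisons in the paper are the real evidence.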
  • Predicting insertion positions in word-level machine translation quality
    • Abstract: Publication date: Available online 30 November 2018. Source: Applied Soft Computing. Author(s): Miquel Esplà-Gomis, Felipe Sánchez-Martínez, Mikel L. Forcada. Word-level machine translation (MT) quality estimation (QE) is usually formulated as the task of automatically identifying which words need to be edited (either deleted or replaced) in a translation T produced by an MT system. The advantage of estimating MT quality at the word level is that this information can be used to guide post-editors, since it enables the identification of the specific words in T that need to be edited in order to ease their work. However, word-level MT QE, as defined in the current literature, has an obvious limitation: it does not identify the positions in T in which missing words need to be inserted. To deal with this limitation, we propose a method which identifies both word deletions and insertion positions in T. This is, to the best of our knowledge, the first approach allowing the identification of insertion positions in word-level MT QE. The method proposed can use any source of bilingual information (such as MT, dictionaries, or phrase-level translation memories) to extract features that are then used by a neural network to produce a prediction for both words and insertion positions (gaps between words) in the translation T. In this paper, several feature sets and neural network architectures are explored and evaluated on publicly-available datasets used in previous evaluation campaigns for word-level MT QE. The results confirm the feasibility of the proposed approach, as well as the usefulness of sharing information between the two prediction tasks in order to obtain more reliable quality estimations.
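The word-and-gap formulation can be made concrete: a translation of n words yields n word slots (keep/edit) plus n + 1 gap slots (insert-here or not). A minimal sketch of that interleaving, with an illustrative function name not taken from the paper:

```python
def interleave_gaps(tokens):
    """Enumerate the prediction slots of word-level MT QE with insertion
    positions: one slot per word and one per gap, where a sentence of
    n words has n + 1 gaps (sketch of the formulation, not the model)."""
    slots = []
    for i, tok in enumerate(tokens):
        slots.append(("gap", i))      # gap before the i-th word
        slots.append(("word", tok))   # the word itself
    slots.append(("gap", len(tokens)))  # final gap after the last word
    return slots
```

Each slot would then receive a binary label from the network: edit/keep for word slots, insert/no-insert for gap slots.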
  • A many-objective evolutionary algorithm with epsilon-indicator direction
    • Abstract: Publication date: Available online 16 December 2018. Source: Applied Soft Computing. Author(s): Yun Yang, Jianping Luo, Lei Huang, Qiqi Liu. The major difficulty in multi-objective evolutionary algorithms (MOEAs) is finding solutions that converge towards the true Pareto front with high diversity. To strengthen the selection pressure of such algorithms, indicator-based algorithms have been proposed to handle many-objective optimization problems (MaOPs), among which the binary additive quality indicator Iε+ is superior to other indicators in terms of low computational complexity. However, it often suffers from edge effects which degrade MOEA performance. In this work, we devise a new MOEA, which combines the binary additive quality indicator Iε+ with direction vectors (EDV), to address MaOPs. At the same time, an efficient resource allocation strategy is developed to improve the diversity of the solution distribution. Simulation results show that EDV outperforms state-of-the-art approaches on all problems considered in this paper and offers a clear advantage in solving black-box problems.
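The binary additive epsilon indicator Iε+ mentioned above has a standard definition (for minimization): the smallest shift ε by which set A must be translated so that every point of B is weakly dominated by some translated point of A. A direct sketch of that standard definition:

```python
import numpy as np

def eps_indicator(A, B):
    """Binary additive epsilon indicator I_eps+(A, B) for minimization:
    max over b in B of (min over a in A of (max coordinate-wise a - b)).
    Standard definition; a sketch, not the paper's full EDV algorithm."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return float(max(min(float(np.max(a - b)) for a in A) for b in B))
```

Iε+(A, B) ≤ 0 means A weakly dominates B outright; the quantity is cheap to evaluate (O(|A|·|B|·m) for m objectives), which is the low computational complexity the abstract refers to.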
  • GPU implementation of Borůvka’s algorithm to Euclidean minimum spanning
           tree based on Elias method
    • Abstract: Publication date: Available online 12 December 2018. Source: Applied Soft Computing. Author(s): Wen-bao Qiao, Jean-charles Créput. We present both sequential and data-parallel approaches to build a hierarchical minimum spanning forest (MSF) or tree (MST) in Euclidean space (EMSF/EMST) for applications whose input N points are uniformly or boundedly distributed in Euclidean space. Each iteration of the sequential approach takes O(N) time by combining Borůvka's algorithm with an improved component-based neighborhood search algorithm, namely sliced spiral search, a newly proposed improvement to Bentley's spiral search for finding a component graph's closest outgoing point on the plane. It exploits the uniqueness property of Euclidean space and allows O(1) time for one search from a query point to find the component's closest outgoing point at different iterations of Borůvka's algorithm. The data-parallel approach includes a newly proposed two-direction breadth-first search (BFS) implementation on a graphics processing unit (GPU) platform, specialized for selecting a spanning tree's minimum outgoing weight. This GPU-parallel two-direction BFS enables a tree traversal operation to start from any one of its vertices acting as the root. The GPU-parallel implementations assign N threads with one thread per input point; each thread occupies O(1) local memory and the whole algorithm occupies O(N) global memory. Experiments are conducted on planar point sets from both uniformly distributed data sets and the TSPLIB database. We evaluate the computation time of the proposed approaches on more than 80 benchmarks with size N growing up to 10⁶ points on a personal laptop.
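For reference, plain sequential Borůvka over a Euclidean point set can be sketched as follows. The brute-force nearest-outside-point search used here is exactly the step the paper's sliced spiral search is designed to replace with O(1) queries:

```python
import numpy as np

def boruvka_emst(points):
    """Sequential Borůvka sketch for a Euclidean MST: each round, every
    component connects to its closest outgoing point, then components
    merge. Brute-force O(N^2) neighbor search per round (the paper's
    sliced spiral search makes each such search O(1))."""
    pts = np.asarray(points, float)
    n = len(pts)
    comp = list(range(n))            # component label per point
    edges = []
    while len(set(comp)) > 1:
        best = {}                    # component -> (dist, i, j)
        for i in range(n):
            for j in range(n):
                if comp[i] != comp[j]:
                    d = float(np.linalg.norm(pts[i] - pts[j]))
                    if comp[i] not in best or d < best[comp[i]][0]:
                        best[comp[i]] = (d, i, j)
        for d, i, j in best.values():
            if comp[i] != comp[j]:   # skip edges made redundant this round
                edges.append((i, j))
                old, new = comp[j], comp[i]
                comp = [new if c == old else c for c in comp]
    return edges
```

Borůvka halves the number of components per round, hence O(log N) rounds; with O(N)-time rounds as in the paper's sequential variant, the total is O(N log N).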
  • A transform-based fast fuzzy C-means approach for high brain MRI
           segmentation accuracy
    • Abstract: Publication date: Available online 11 December 2018. Source: Applied Soft Computing. Author(s): Chandan Singh, Anu Bala. Segmentation of brain magnetic resonance (MR) images has a significant impact on computer-aided diagnosis and analysis. However, due to the presence of noise in medical images, many segmentation methods suffer from limited segmentation accuracy. To reduce the effect of noise and achieve high segmentation accuracy, many approaches based on local and nonlocal information in the spatial domain have been proposed in the past. Recently, the authors proposed a discrete cosine transform (DCT)-based local and nonlocal fuzzy C-means method (DCT-LNLFCM) which performs much better than existing methods but is slow. This paper presents a fast DCT-based nonlocal fuzzy C-means (DCT-NLFCM) segmentation method which is not only much faster than DCT-LNLFCM but also provides better segmentation results. The proposed method uses a DCT-based pre-filtered MR image to achieve high segmentation accuracy, and operating on its histogram enables very high computation speed. Detailed experiments are conducted to establish the superiority of the proposed method over state-of-the-art unsupervised methods.
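Plain fuzzy C-means, the base algorithm being accelerated here, alternates membership and centre updates. The sketch below shows that base loop only; it omits the DCT pre-filtering and histogram acceleration that are the paper's actual contributions:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means sketch: alternate weighted-mean centre
    updates and inverse-distance membership updates with fuzzifier m
    (no DCT pre-filtering or histogram speed-up)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # inverse-distance weights
        U /= U.sum(axis=1, keepdims=True)      # renormalize rows
    return centers, U
```

The histogram trick in the paper exploits that grey-level images have few distinct intensities, so these updates can run over the histogram bins instead of every pixel.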
  • Facial expression recognition using iterative universum twin support
           vector machine
    • Abstract: Publication date: Available online 11 December 2018. Source: Applied Soft Computing. Author(s): Bharat Richhariya, Deepak Gupta. Facial expressions are one of the most important characteristics of human behaviour, and they are very useful in human-computer interaction applications. To classify facial emotions, different feature extraction methods are used with machine learning techniques. In supervised learning, additional information about the distribution of the data can be given by data points that do not belong to any of the classes; these data points are known as universum data. In this work, we use universum data to perform multiclass classification of facial emotions from human facial images. Moreover, existing universum-based models suffer from high training cost, so we propose an iterative universum twin support vector machine (IUTWSVM) using Newton's method. Our IUTWSVM gives good generalization performance at lower computation cost, and no optimization toolbox is required to solve its optimization problem. Further, improper selection of universum points leads to degraded model performance, so a novel scheme for generating better universum points is proposed based on the information entropy of the data. To check the effectiveness of the proposed IUTWSVM, several numerical experiments are performed on benchmark real-world datasets. For multiclass classification of facial emotions, the performance of IUTWSVM is compared with existing algorithms using different feature extraction techniques. The proposed algorithm shows better generalization performance with less training cost in both binary and multiclass classification problems.
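For context, a classic baseline for generating universum points is to average samples drawn from opposite classes, so the resulting points belong to neither class. This is the standard baseline only, not the entropy-based selection scheme the paper proposes:

```python
import numpy as np

def average_universum(X_pos, X_neg, n=10, seed=0):
    """Classic universum generation: midpoints of random pairs drawn
    from the two classes (a common baseline; the paper instead selects
    universum data via information entropy)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X_pos), n)
    j = rng.integers(0, len(X_neg), n)
    return (X_pos[i] + X_neg[j]) / 2.0
```

Such midpoints sit between the class clusters, which is why they carry information about the data distribution without carrying a class label.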
  • Modelling of a surface marine vehicle with kernel ridge regression
           confidence machine
    • Abstract: Publication date: Available online 10 December 2018. Source: Applied Soft Computing. Author(s): David Moreno-Salinas, Raul Moreno, Augusto Pereira, Joaquin Aranda, Jesus M. de la Cruz. This paper describes the use of Kernel Ridge Regression (KRR) and the Kernel Ridge Regression Confidence Machine (KRRCM) for black-box identification of a surface marine vehicle. Data for training and testing were obtained from several manoeuvres typically used in marine system identification: a 20/20-degree zig-zag, a 10/10-degree zig-zag, and different evolution circles were employed for the computation and validation of the model. Results show that applying conformal prediction provides a model that reproduces the actual behaviour of the ship with high accuracy, together with confidence margins that ensure the model response stays within them, making it a suitable tool for system identification.
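Plain KRR has a closed-form dual solution, which is what makes it attractive for black-box identification. The sketch below uses an RBF kernel and omits the conformal-prediction wrapper (KRRCM) that supplies the confidence margins described above:

```python
import numpy as np

def krr_fit_predict(X, y, X_test, gamma=1.0, lam=1e-3):
    """Kernel ridge regression with an RBF kernel: solve
    (K + lam*I) alpha = y for the dual weights, then predict with
    k(x_test, X) @ alpha (generic sketch; no conformal wrapper)."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual weights
    return rbf(X_test, X) @ alpha
```

In the conformal-prediction setting, held-out residuals of this predictor would then be ranked to turn each point prediction into a calibrated interval.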