  Subjects -> COMPUTER SCIENCE (Total: 2072 journals)
    - ANIMATION AND SIMULATION (31 journals)
    - ARTIFICIAL INTELLIGENCE (102 journals)
    - AUTOMATION AND ROBOTICS (105 journals)
    - CLOUD COMPUTING AND NETWORKS (63 journals)
    - COMPUTER ARCHITECTURE (10 journals)
    - COMPUTER ENGINEERING (11 journals)
    - COMPUTER GAMES (21 journals)
    - COMPUTER PROGRAMMING (26 journals)
    - COMPUTER SCIENCE (1202 journals)
    - COMPUTER SECURITY (46 journals)
    - DATA BASE MANAGEMENT (14 journals)
    - DATA MINING (36 journals)
    - E-BUSINESS (22 journals)
    - E-LEARNING (29 journals)
    - ELECTRONIC DATA PROCESSING (22 journals)
    - IMAGE AND VIDEO PROCESSING (40 journals)
    - INFORMATION SYSTEMS (107 journals)
    - INTERNET (93 journals)
    - SOCIAL WEB (51 journals)
    - SOFTWARE (33 journals)
    - THEORY OF COMPUTING (8 journals)

COMPUTER SCIENCE (1202 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 20)
Abakós     Open Access   (Followers: 4)
ACM Computing Surveys     Hybrid Journal   (Followers: 28)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 8)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 14)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 15)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 5)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 9)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 12)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 15)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 6)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 5)
ACM Transactions on Economics and Computation     Hybrid Journal   (Followers: 1)
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 4)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 19)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 8)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 3)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 6)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 8)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 9)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Hybrid Journal   (Followers: 31)
Acta Automatica Sinica     Full-text available via subscription   (Followers: 2)
Acta Informatica Malaysia     Open Access  
Acta Universitatis Cibiniensis. Technical Series     Open Access  
Ad Hoc Networks     Hybrid Journal   (Followers: 11)
Adaptive Behavior     Hybrid Journal   (Followers: 11)
Advanced Engineering Materials     Hybrid Journal   (Followers: 28)
Advanced Science Letters     Full-text available via subscription   (Followers: 10)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 7)
Advances in Artificial Intelligence     Open Access   (Followers: 15)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 4)
Advances in Catalysis     Full-text available via subscription   (Followers: 5)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 19)
Advances in Computer Engineering     Open Access   (Followers: 4)
Advances in Computer Science : an International Journal     Open Access   (Followers: 14)
Advances in Computing     Open Access   (Followers: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 54)
Advances in Engineering Software     Hybrid Journal   (Followers: 28)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 14)
Advances in Human Factors/Ergonomics     Full-text available via subscription   (Followers: 23)
Advances in Human-Computer Interaction     Open Access   (Followers: 20)
Advances in Materials Science     Open Access   (Followers: 14)
Advances in Operations Research     Open Access   (Followers: 12)
Advances in Parallel Computing     Full-text available via subscription   (Followers: 7)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Remote Sensing     Open Access   (Followers: 49)
Advances in Science and Research (ASR)     Open Access   (Followers: 6)
Advances in Technology Innovation     Open Access   (Followers: 6)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 9)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 4)
AI EDAM     Hybrid Journal  
Air, Soil & Water Research     Open Access   (Followers: 12)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 6)
Algebras and Representation Theory     Hybrid Journal   (Followers: 1)
Algorithms     Open Access   (Followers: 11)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 5)
American Journal of Computational Mathematics     Open Access   (Followers: 4)
American Journal of Information Systems     Open Access   (Followers: 5)
American Journal of Sensor Technology     Open Access   (Followers: 4)
Anais da Academia Brasileira de Ciências     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 7)
Analysis in Theory and Applications     Hybrid Journal   (Followers: 1)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 5)
Annals of Combinatorics     Hybrid Journal   (Followers: 4)
Annals of Data Science     Hybrid Journal   (Followers: 12)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 12)
Annals of Pure and Applied Logic     Open Access   (Followers: 3)
Annals of Software Engineering     Hybrid Journal   (Followers: 13)
Annual Reviews in Control     Hybrid Journal   (Followers: 8)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 2)
Applied and Computational Harmonic Analysis     Full-text available via subscription   (Followers: 1)
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 12)
Applied Categorical Structures     Hybrid Journal   (Followers: 2)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 12)
Applied Computer Systems     Open Access   (Followers: 2)
Applied Informatics     Open Access  
Applied Mathematics and Computation     Hybrid Journal   (Followers: 33)
Applied Medical Informatics     Open Access   (Followers: 10)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Soft Computing     Hybrid Journal   (Followers: 16)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 5)
Applied System Innovation     Open Access  
Architectural Theory Review     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 5)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 144)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 5)
arq: Architectural Research Quarterly     Hybrid Journal   (Followers: 8)
Artifact     Hybrid Journal   (Followers: 2)
Artificial Life     Hybrid Journal   (Followers: 7)
Asia Pacific Journal on Computational Engineering     Open Access  
Asia-Pacific Journal of Information Technology and Multimedia     Open Access   (Followers: 1)
Asian Journal of Computer Science and Information Technology     Open Access  
Asian Journal of Control     Hybrid Journal   (Followers: 1)
Assembly Automation     Hybrid Journal   (Followers: 2)
at - Automatisierungstechnik     Hybrid Journal   (Followers: 1)
Australian Educational Computing     Open Access   (Followers: 1)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 5)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 12)
Automation in Construction     Hybrid Journal   (Followers: 6)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 9)
Basin Research     Hybrid Journal   (Followers: 5)
Behaviour & Information Technology     Hybrid Journal   (Followers: 52)
Big Data and Cognitive Computing     Open Access   (Followers: 2)
Biodiversity Information Science and Standards     Open Access  
Bioinformatics     Hybrid Journal   (Followers: 305)
Biomedical Engineering     Hybrid Journal   (Followers: 15)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 13)
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 21)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 37)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 50)
British Journal of Educational Technology     Hybrid Journal   (Followers: 143)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 12)
c't Magazin fuer Computertechnik     Full-text available via subscription   (Followers: 1)
CALCOLO     Hybrid Journal  
Calphad     Hybrid Journal   (Followers: 2)
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 15)
Capturing Intelligence     Full-text available via subscription  
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 2)
Cell Communication and Signaling     Open Access   (Followers: 2)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 5)
CERN IdeaSquare Journal of Experimental Innovation     Open Access   (Followers: 3)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 14)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 7)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
CIN Computers Informatics Nursing     Full-text available via subscription   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 15)
Clean Air Journal     Full-text available via subscription   (Followers: 1)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Cluster Computing     Hybrid Journal   (Followers: 1)
Cognitive Computation     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combinatorics, Probability and Computing     Hybrid Journal   (Followers: 4)
Combustion Theory and Modelling     Hybrid Journal   (Followers: 14)
Communication Methods and Measures     Hybrid Journal   (Followers: 12)
Communication Theory     Hybrid Journal   (Followers: 22)
Communications Engineer     Hybrid Journal   (Followers: 1)
Communications in Algebra     Hybrid Journal   (Followers: 3)
Communications in Computational Physics     Full-text available via subscription   (Followers: 2)
Communications in Information Science and Management Engineering     Open Access   (Followers: 4)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 3)
Communications of the ACM     Full-text available via subscription   (Followers: 51)
Communications of the Association for Information Systems     Open Access   (Followers: 16)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 3)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 6)
Complexus     Full-text available via subscription  
Composite Materials Series     Full-text available via subscription   (Followers: 8)
Computación y Sistemas     Open Access  
Computation     Open Access   (Followers: 1)
Computational and Applied Mathematics     Hybrid Journal   (Followers: 2)
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 2)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 9)
Computational Astrophysics and Cosmology     Open Access   (Followers: 1)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 12)
Computational Chemistry     Open Access   (Followers: 2)
Computational Cognitive Science     Open Access   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Condensed Matter     Open Access  
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 9)
Computational Geosciences     Hybrid Journal   (Followers: 17)
Computational Linguistics     Open Access   (Followers: 23)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 5)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 7)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 30)
Computer     Full-text available via subscription   (Followers: 98)
Computer Aided Surgery     Open Access   (Followers: 6)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 8)
Computer Communications     Hybrid Journal   (Followers: 16)
Computer Journal     Hybrid Journal   (Followers: 9)
Computer Methods in Applied Mechanics and Engineering     Hybrid Journal   (Followers: 24)
Computer Methods in Biomechanics and Biomedical Engineering     Hybrid Journal   (Followers: 12)
Computer Methods in the Geosciences     Full-text available via subscription   (Followers: 2)


Applied Soft Computing
Journal Prestige (SJR): 1.199
Citation Impact (CiteScore): 5
Number of Followers: 16
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 1568-4946
Published by Elsevier  [3161 journals]
  • Sequential spectral clustering of hyperspectral remote sensing image over
           bipartite graph
    • Abstract: Publication date: Available online 20 September 2018. Source: Applied Soft Computing. Author(s): Aidin Hassanzadeh, Arto Kaarna, Tuomo Kauranne. Unsupervised classification is a crucial step in remote sensing hyperspectral image analysis, where producing labeled data is a laborious task. Spectral clustering is an appealing graph-partitioning technique with outstanding performance on data with non-linear dependencies. However, spectral clustering is restricted to small-scale data and has not been effectively applied to hyperspectral image analysis. In this paper, the unsupervised classification of hyperspectral images is addressed through a sequential spectral clustering that can be extended to large-scale hyperspectral images. To this end, the paper uses a bipartite graph representation along with a sequential singular value decomposition and mini-batch K-means for unsupervised classification of hyperspectral imagery. We evaluate the proposed algorithm on several benchmark hyperspectral datasets, including Botswana, Salinas, Indian Pines, Pavia Center Scene and Pavia University Scene. The experimental results show significant improvements by the proposed algorithm over state-of-the-art clustering algorithms.
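The pipeline this abstract outlines (affinities between pixels and a small set of anchors, a normalized SVD embedding, then k-means) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian affinity with bandwidth `sigma`, the every-fourth-point anchor choice, and the plain (rather than sequential, mini-batch) k-means are all simplifying assumptions.

```python
import numpy as np

def bipartite_spectral_cluster(X, anchors, k, sigma=1.0):
    """Spectral clustering over a pixel-anchor bipartite graph:
    Gaussian affinities to a few anchors, degree normalization,
    SVD of the normalized affinity matrix, then k-means on the
    left singular vectors."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    B = np.exp(-d2 / (2.0 * sigma ** 2))          # n x m cross-affinities
    Bn = B / np.sqrt(B.sum(1))[:, None] / np.sqrt(B.sum(0))[None, :]
    U, _, _ = np.linalg.svd(Bn, full_matrices=False)
    emb = U[:, :k]
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    # farthest-point initialization, then plain Lloyd iterations
    centers = emb[[0]]
    for _ in range(1, k):
        d = ((emb[:, None] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, emb[d.argmax()]])
    for _ in range(50):
        labels = ((emb[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = emb[labels == j].mean(0)
    return labels

# two well-separated groups of simulated pixel spectra
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
labels = bipartite_spectral_cluster(X, anchors=X[::4], k=2)
```

On the toy data above the two groups land in two distinct clusters; the point of the anchor trick is that the SVD runs on an n x m matrix instead of the full n x n graph.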
       
  • An improved multi-objective optimization approach for performance-based
           design of structures using nonlinear time–history analyses
    • Abstract: Publication date: Available online 20 September 2018. Source: Applied Soft Computing. Author(s): Vahid Mokarram, Mohammad Reza Banan. Performance-based design (PBD) of buildings can be properly addressed in a multi-objective optimization framework. However, the computational costs of such an approach will be very high, especially if nonlinear time–history analysis (NTHA) is used as the evaluation tool. In this paper, significant reductions in the computational costs of solving structural multi-objective optimization problems are achieved by proposing a new metaheuristic surrogate model called Surrogate FC-MOPSO. In this method, which is an extension of the FC-MOPSO algorithm, NTHA and pushover analysis (PA) are simultaneously employed for evaluating system responses. PAs are adopted as an approximation tool in the surrogate model, while the responses corresponding to feasible solutions are always evaluated from NTHAs. The final Pareto optimal solutions, which yield tradeoffs between initial and life cycle costs (LCCs), are also evaluated based on NTHAs. It is shown that applying the proposed method substantially reduces the runtime of the considered problems. It is also demonstrated that adopting PAs as the only evaluation tool in optimal performance-based design of structures can result in unreliable solutions.
       
  • Application of a density based clustering technique on biomedical datasets
    • Abstract: Publication date: Available online 20 September 2018. Source: Applied Soft Computing. Author(s): Md Anisur Rahman, Md Zahidul Islam. Detecting the number of clusters in a biomedical dataset is very important for generating high quality clusters from it. In this paper, we evaluate the performance of a density based K-Means clustering technique called DenClust on biomedical datasets. DenClust produces the number of clusters and high quality initial seeds from a dataset through a density based seed selection approach, without requiring user input on the number of clusters or the radius of the clusters. The high quality initial seeds for K-Means result in high quality clusters from a dataset. The performance of DenClust is compared with six existing clustering techniques, namely CRUDAW-F, CRUDAW-H, AGCUK, GAGR, K-Means, and K-Means++, on twenty biomedical datasets in terms of two external cluster evaluation criteria, namely Entropy and Purity, and one internal cluster evaluation criterion, the Sum of Squared Errors (SSE). We also perform a statistical non-parametric sign test on the cluster evaluation results of the techniques. Both the cluster evaluation results and the sign test results indicate the superiority of DenClust over the existing techniques on the biomedical datasets. The complexity of DenClust is O(n²), but its overall execution time on the datasets is less than that of AGCUK and GAGR, which have O(n) complexity.
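DenClust's exact procedure is in the paper; a generic density-based seeding scheme in the same spirit can be sketched as below. The Euclidean radius `r` and the greedy densest-first selection are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def density_seeds(X, r):
    """Greedy density-based seed selection: repeatedly take the point
    with the most unclaimed neighbours within radius r as a seed, then
    remove that neighbourhood. The number of seeds returned is the
    implied number of clusters k."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    alive = np.ones(len(X), dtype=bool)
    seeds = []
    while alive.any():
        counts = (d[:, alive] <= r).sum(1) * alive   # density among unclaimed
        i = int(counts.argmax())
        seeds.append(X[i])
        alive &= d[i] > r                            # claim the neighbourhood
    return np.array(seeds)

# two compact groups: the seeds land near the group centres and k = 2
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (15, 2)), rng.normal(10.0, 0.1, (15, 2))])
seeds = density_seeds(X, r=1.0)
```

Feeding the returned seeds to K-Means removes both the k and the initialization guesswork, which is the benefit the abstract attributes to DenClust.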
       
  • A novel artificial bee colony algorithm for the workforce scheduling and
           balancing problem in sub-assembly lines with limited buffers
    • Abstract: Publication date: Available online 19 September 2018. Source: Applied Soft Computing. Author(s): Alkin Yurtkuran, Betul Yagmahan, Erdal Emel. In this study, a workforce scheduling and balancing problem is solved in unpaced sub-assembly lines with buffers feeding the paced body assembly line of a car manufacturer. The goal is to determine the minimum workforce required to process split lots at sub-assembly stations to feed the paced line over a periodic time window. Limited by a given buffer capacity at each station but with flexible start times for each split lot, an efficient workforce schedule can prevent shortages in downstream stations. Therefore, a stock-continuity equation is proposed that yields the size of those split lots. Next, a single-objective Mixed Integer Programming (MIP) model is formulated for the problem as a combination of two implicitly weighted goals: minimising the workforce and the unbalanced workloads. The problem is a variant of the workforce scheduling and routing problem with time windows and negligible walking distances. Due to the NP-hardness of the problem, we propose an improved Artificial Bee Colony (ABC) algorithm, named discrete ABC with solution acceptance rule and multi-search (SAMSABC). The proposed algorithm is compared with different variants of ABC and other well-known metaheuristics such as Particle Swarm Optimisation and Differential Evolution on generated test cases. The computational results demonstrate the superiority of the proposed ABC algorithm and reveal that SAMSABC can achieve accurate results within short computational times.
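SAMSABC itself is specific to the paper, but the baseline Artificial Bee Colony it extends follows a standard employed/onlooker/scout cycle. A minimal sketch on a toy continuous objective (the sphere function): all parameter values here are assumptions, and the solution acceptance rule and multi-search of SAMSABC are not reproduced.

```python
import numpy as np

def abc_minimize(f, dim=2, n_food=10, limit=20, iters=200,
                 lb=-5.0, ub=5.0, seed=0):
    """Baseline ABC: employed bees improve food sources greedily,
    onlookers revisit sources proportionally to fitness, scouts
    replace sources that failed to improve `limit` times."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(lb, ub, (n_food, dim))
    fits = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        # perturb one random dimension toward a random partner source
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lb, ub)
        fc = f(cand)
        if fc < fits[i]:                      # greedy selection
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):               # employed bees
            try_neighbor(i)
        qual = 1.0 / (1.0 + fits)             # assumes f >= 0
        probs = qual / qual.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):
            try_neighbor(i)                   # onlooker bees
        for i in np.where(trials > limit)[0]:  # scout bees
            foods[i] = rng.uniform(lb, ub, dim)
            fits[i] = f(foods[i])
            trials[i] = 0
    best = fits.argmin()
    return foods[best], fits[best]

sphere = lambda x: float(np.sum(x * x))
x_best, f_best = abc_minimize(sphere)
```

The scheduling problem in the paper is discrete, so its neighbourhood move replaces the continuous perturbation above; the three-phase colony structure is the part that carries over.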
       
  • The study and application of a novel hybrid system for air quality
           early-warning
    • Abstract: Publication date: Available online 18 September 2018. Source: Applied Soft Computing. Author(s): Yan Hao, Chengshi Tian. Air quality early-warning plays a vital role in improving air quality and human health, especially multi-step ahead air quality early-warning, which is significant for both citizens and environmental protection departments. However, most previous studies have employed only simple data decomposition to perform one-step forecasting, and aimed at enhancing either forecasting accuracy or stability; little research has improved both standards simultaneously, leading to poor forecasting performance. Therefore, in this paper, a novel hybrid air quality early-warning system consisting of four modules (data preprocessing, optimization, forecasting and evaluation) is proposed to perform multi-step ahead air quality early-warning. In this system, an effective data decomposition method, the modified complete ensemble empirical mode decomposition with adaptive noise, is developed to extract the characteristics of air quality data and further improve forecasting performance. Moreover, a hybrid Elman neural network model, optimized by the multi-objective salp swarm algorithm, is developed in the forecasting module and simultaneously achieves high forecasting accuracy and stability. In addition, the evaluation module is designed to conduct a reasonable and scientific evaluation of the system. Three cities in China are used to test the effectiveness of the proposed system, and the results reveal that it is superior in both accuracy and stability to other benchmark models and can serve as a reliable tool for multi-step ahead air quality early-warning.
       
  • Acknowledgement of Reviewers for Volume 5
    • Abstract: Publication date: Available online 19 August 2005. Source: Applied Soft Computing.
       
  • Developing pessimistic-optimistic risk-based methods for multi-sensor
           fusion: An interval-valued evidence theory approach
    • Abstract: Publication date: Available online 15 September 2018. Source: Applied Soft Computing. Author(s): Hamidreza Seiti, Ashkan Hafezalkotob. Multi-sensor fusion plays an important role in monitoring the systems and equipment under consideration. The final integrated information obtained from these sensors is always associated with uncertainties and ambiguities; to handle these inaccuracies, various uncertainty theories, e.g., Dempster-Shafer theory (DST), have been proposed. Several models optimistically assume that the sensors always work properly, but in reality they can fail like any other physical equipment. In the past decades, many researchers calculated dynamic sensor reliability by comparing the information given by each sensor to that of the other sensors. However, such methods are sometimes problematic, because the considered sensor may be working properly while the others fail. To address these problems, our proposed methods introduce a sensor risk factor into the sensor output to enhance accuracy. In this paper, we present two pessimistic-optimistic models based on interval-valued and fuzzy-valued DST to model this error factor. Finally, a numerical example is given to illustrate the application of the proposed models.
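The interval-valued and fuzzy-valued extensions are the paper's contribution, but the classical Dempster-Shafer combination they build on is standard. A sketch of Dempster's rule for two crisp mass functions, with focal elements represented as frozensets (the frame {fault, ok} below is an invented example):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply the masses of every
    pair of focal elements, assign each product to the intersection,
    and renormalise away the conflict mass K of empty intersections."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully contradict")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# two sensors reporting over the frame {fault, ok}
m1 = {frozenset({"fault"}): 0.6, frozenset({"fault", "ok"}): 0.4}
m2 = {frozenset({"ok"}): 0.5, frozenset({"fault", "ok"}): 0.5}
m = dempster_combine(m1, m2)
```

The renormalisation by 1 - K is exactly the optimistic step the abstract questions: a failing sensor inflates conflict, which is why the paper discounts each source by a risk factor before combining.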
       
  • A fuzzy goal programming model in portfolio selection under
           competitive-cum-compensatory decision strategies
    • Abstract: Publication date: Available online 15 September 2018. Source: Applied Soft Computing. Author(s): Mahuya De, B.K. Mangaraj, K.B. Das. Many portfolio selection problems involve multiple conflicting and imprecise criteria in the decision environment. Multi-criteria decision-making approaches using fuzzy logic can tackle such problems when the imprecision is due to fuzziness. There are several fuzzy goal programming models for portfolio selection in the literature, using either the "Min" or the "additive" aggregation operator. While aggregation of decision criteria by the "Min" operator stands for a fully competitive decision strategy, the "additive" operator represents a fully compensatory one. In this paper, we propose a fuzzy goal programming approach using Werner's "fuzzy and" hybrid operator, a combination of "Min" and the arithmetic average, to generate the efficient frontier. We obtain efficient solutions for different values of γ∈[0,1], the parameter of compensation for the criteria in the decision environment. We consider three criteria, viz. return, risk and liquidity. While we use semi-absolute deviation to measure risk, we use piece-wise linear functions to describe the fuzzy return and fuzzy liquidity criteria. We take stocks from three indices based on market capitalization, viz. NIFTY 50, NIFTY Smallcap 100 and NIFTY 500, from the NSE in India. The model demonstrates efficient portfolios for different strategies in a competitive-cum-compensatory decision environment.
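Werner's "fuzzy and" operator mentioned in the abstract has a simple closed form; a sketch is below. The membership values are invented for illustration, and the portfolio model itself (three criteria, semi-absolute deviation, piece-wise linear memberships) is not reproduced.

```python
def fuzzy_and(memberships, gamma):
    """Werner's 'fuzzy and' hybrid aggregation:
    gamma * min + (1 - gamma) * arithmetic average.
    gamma = 1 is the fully competitive ('Min') strategy,
    gamma = 0 the fully compensatory ('additive') one."""
    mus = list(memberships)
    return gamma * min(mus) + (1.0 - gamma) * sum(mus) / len(mus)

# hypothetical degrees of satisfaction of the return, risk
# and liquidity goals for one candidate portfolio
mus = [0.2, 0.6, 0.7]
low, mid, high = fuzzy_and(mus, 1.0), fuzzy_and(mus, 0.5), fuzzy_and(mus, 0.0)
```

Sweeping γ over [0,1], as the paper does, traces out solutions between the worst-criterion value (0.2 here) and the average (0.5 here), which is how one strategy parameter spans the competitive-to-compensatory range.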
       
  • Solving transcendental equation using artificial neural network
    • Abstract: Publication date: Available online 14 September 2018. Source: Applied Soft Computing. Author(s): S.K. Jeswal, S. Chakraverty. Transcendental equations play an important role in solving various science and engineering problems. Many transcendental equations cannot be solved by the usual numerical methods. Accordingly, this paper gives a novel idea for solving transcendental equations using the concept of an Artificial Neural Network (ANN). A multilayer network architecture (viz. a four-layer network architecture) is proposed for solving the transcendental equation. The detailed network architecture and the procedures to solve single transcendental equations and systems of them are discussed. The weights from the input layer to the first hidden layer consist of the unknown variable, and the other weights in different layers are the known coefficients of the given transcendental equation. After training by the proposed steps and the back propagation technique, starting with a guess value(s), the unknown variable(s) converge to the desired accuracy, thereby giving the solution of the equation. A few standard example problems are presented to validate the proposed method. Further, two examples show the applicability of the ANN method in comparison with a well-known numerical method. Moreover, an application problem of a junction diode circuit is also addressed.
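The paper's four-layer network is not reproduced here, but its core idea, treating the unknown as a trainable weight and driving the residual to zero by backpropagation-style gradient descent, can be illustrated on the classic equation x = cos(x). The learning rate and step count are assumptions of this sketch.

```python
import math

def solve_transcendental(f, df, x0, lr=0.1, steps=2000):
    """Treat the unknown x as a single trainable weight and minimise
    the squared residual f(x)^2 by gradient descent, the same idea as
    backpropagating through a network whose output should be zero."""
    x = x0
    for _ in range(steps):
        x -= lr * 2.0 * f(x) * df(x)   # gradient of f(x)^2
    return x

# x = cos(x): rewrite as f(x) = x - cos(x) = 0
root = solve_transcendental(lambda x: x - math.cos(x),
                            lambda x: 1.0 + math.sin(x), x0=0.0)
```

Because the update is a fixed point exactly when f(x) = 0, convergence of the "training" is convergence to a root; the multilayer version in the paper handles systems the same way, one weight per unknown.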
       
  • An ameliorated particle swarm optimizer for solving numerical optimization
           problems
    • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Ke Chen, Fengyu Zhou, Yugang Wang, Lei Yin. Although the particle swarm optimizer (PSO) has been widely used to address various complicated engineering problems, it is likely to suffer from a lack of diversity and an ineffective balance between global and local search ability during the search process. In this paper, we report an innovative and improved optimization method called the ameliorated particle swarm optimizer (A-PSO), which differs from the original PSO algorithm and its variants in its parameter update and the position generation of each particle. In A-PSO, nonlinear dynamic acceleration coefficients, a logistic map and a modified particle position update approach are introduced into PSO to improve solution quality and accelerate the global convergence rate. Twenty well-known numerical optimization functions are adopted to evaluate the effectiveness of the proposed method, and it is shown that, for most numerical optimization problems, the convergence performance and search accuracy of A-PSO are superior to those of similar heuristic optimization algorithms and other well-known PSO variants. In other words, the proposed A-PSO technique has a faster convergence rate and is more stable than other PSO variants and similar population-based methods for almost all numerical optimization problems. Therefore, A-PSO can be used as a new optimization technique for solving numerical optimization problems.
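For reference, the baseline PSO that A-PSO modifies can be sketched as below. A-PSO's specific ingredients (nonlinear dynamic acceleration coefficients, the logistic map, the modified position update) are not reproduced, and the inertia/acceleration values are standard textbook assumptions.

```python
import numpy as np

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle is pulled toward its personal best
    and the global best, with inertia w damping the velocity."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

sphere = lambda p: float((p ** 2).sum())
best_x, best_f = pso(sphere)
```

The diversity problem the abstract describes shows up when all particles collapse onto g; A-PSO's chaotic (logistic-map) component and time-varying c1, c2 are aimed precisely at delaying that collapse.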
       
  • A PCA-assisted hybrid algorithm combining EAs and adjoint methods for
           CFD-based optimization
     • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Dimitrios Kapsoulis, Konstantinos Tsiakas, Xenofon Trompoukis, Varvara Asouti, Kyriakos Giannakoglou. Hybrid optimization algorithms, which combine Evolutionary Algorithms (EAs) and Gradient-Based (GB) methods to refine the most promising solutions, are valuable tools for engineering optimization. Several hybrid methods can be found in the literature; differences among them are associated with the criteria used to select individuals for refinement through the GB method and the feedback the EA gets from the latter. GB methods require the gradient of the objective functions with respect to the design variables. By employing the adjoint method in problems governed by partial differential equations, the cost of computing the gradient becomes independent of the number of design variables. For the multi-objective optimization problems this paper exclusively deals with, the availability of the gradients of all objective functions is not enough: hybrid algorithms require the computation of descent directions in the objective space capable of improving the current front of non-dominated solutions, and using the sum of weighted objectives as the new objective function is ineffective. In this paper, a method which refrains from using arbitrarily defined weights is proposed. The method is driven by data obtained from the Principal Component Analysis (PCA) of the objective function values of the elite individuals at each generation of the EA. The PCA, whose computational cost is that of solving an eigenproblem, identifies the direction in the objective space along which the current front of non-dominated solutions should be improved. This direction, along with the gradients computed by the adjoint method, is used by the GB method to refine selected individuals. The efficiency of the proposed hybrid algorithm is further improved by employing online trained surrogate models or metamodels and Kernel PCA within the EA-based search. The proposed method is demonstrated in aerodynamic shape optimization problems, using in-house Computational Fluid Dynamics software and its adjoint.
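The PCA step described in this abstract can be illustrated with a minimal sketch: the dominant eigenvector of the covariance matrix of the elites' objective values gives the direction along which the front spreads. The function name and toy data below are illustrative, not from the paper.

```python
import numpy as np

def principal_direction(elite_objectives):
    """Return the unit eigenvector (largest eigenvalue) of the covariance
    matrix of the elite individuals' objective-function values."""
    F = np.asarray(elite_objectives, dtype=float)
    C = np.cov(F, rowvar=False)           # covariance in objective space
    eigvals, eigvecs = np.linalg.eigh(C)  # symmetric eigenproblem
    return eigvecs[:, np.argmax(eigvals)]

# Elites spread along the f1 = -f2 trade-off line:
elites = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
d = principal_direction(elites)
```

The cost is that of one small eigenproblem (dimension = number of objectives), which matches the abstract's remark about computational cost.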
       
  • Novel roughness measurement for grinding surfaces using simulated data by
           transfer kernel learning
     • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Hang Zhang, Jian Liu, Shengfeng Chen, Weifang Wang. In conventional visual roughness measurement methods, constructing a relationship between an image feature index and surface roughness requires a large number of samples with a wide range of known roughness at uniform intervals as input for training or fitting. Considering these challenges, this paper proposes a visual roughness measurement method based on simulated data and transfer kernel learning. In the proposed method, a virtual sample with specified roughness is first created via non-Gaussian surface digital simulation and three-dimensional entity modeling technology. After that step, surface images of the virtual and processed samples are generated through image simulation and actual imaging experiments. Next, the discrepancy between the image feature index distributions of the simulation and actual domains is adapted by transfer kernel learning. A regression model is trained on the simulated samples with known roughness and is later generalized to the actual domain via a cross-domain kernel matrix to predict the roughness of the processed samples. To transfer the similar red and green mixing effects between the actual and simulation domains, a relative mixing degree index and a mixing region area index are designed based on the color information. By comparing these two indexes with the image pixel color difference index and the image sharpness index, the feasibility and effectiveness of the proposed method are validated. The experimental results show that the proposed method can achieve an accuracy of over 90% based on the simulated data and transfer kernel learning. The proposed method provides a new improvement strategy for visual roughness measurement.
       
  • Economic dispatch of power systems using an adaptive charged system search
           algorithm
     • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): P. Zakian, A. Kaveh. In this article, an adaptive charged system search (ACSS) algorithm is developed for the solution of economic dispatch problems. The proposed ACSS is based on the charged system search (CSS), a meta-heuristic algorithm utilizing the governing Coulomb law from electrostatics and the Newtonian laws of mechanics. Here, two effective strategies are incorporated into the new ACSS. The first is an improved initialization based on opposition-based learning and subspacing techniques. The second is a Lévy flight random walk that enriches the updating process of the algorithm. Many types of economic dispatch cases comprising 6-, 13-, 15-, 40-, 160- and 640-unit generation systems are tested as benchmarks ranging from small- to large-scale problems. These problems entail different constraints consisting of power balance, ramp rate limits, prohibited operating zones and valve point load effects. Additionally, multiple fuel options and transmission losses are included for some test cases. Moreover, simple constraint handling functions are developed in terms of a penalty approach, which can readily be incorporated into any other meta-heuristic algorithm. Results indicate that the ACSS either outperforms or performs comparably to the CSS and other optimizers in finding optimized fuel costs.
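A Lévy flight random walk of the kind credited here with enriching the updating process is commonly generated with Mantegna's algorithm; the following is a hedged sketch, with the stability index `beta` and the step scale chosen for illustration rather than taken from the paper.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One heavy-tailed step drawn via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)   # scale of the numerator Gaussian
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# Perturb a candidate position with a small heavy-tailed move:
x = [0.2, 0.8]
x_new = [xi + 0.01 * levy_step() for xi in x]
```

Occasional long jumps from the heavy tail help the search escape local optima, while most steps stay small.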
       
  • Integrating the whale algorithm with Tabu search for quadratic assignment
           problem: A new approach for locating hospital departments
     • Abstract: Publication date: Available online 12 September 2018. Source: Applied Soft Computing. Author(s): Mohamed Abdel-Baset, M. Gunsekaran, Doaa El-Shahat, Seyedali Mirjalili. The Quadratic Assignment Problem (QAP) is a combinatorial NP-hard optimization problem that is not solvable in polynomial time. It has a large number of real-world applications in diverse fields (e.g. facility arrangement in a hospital). The Whale Optimization Algorithm is a new meta-heuristic that has achieved great success in solving continuous problems. In this paper, we propose a memetic algorithm using the Whale Optimization Algorithm (WA) integrated with Tabu Search (WAITS) for solving the QAP. This work employs Tabu Search as a local search algorithm to improve the quality of the solutions obtained by WA for the QAP, in an attempt to address the slow convergence and weak local search that are the main drawbacks of WA. Due to the combinatorial nature of the QAP, the continuous values generated by the standard WA are converted to discrete values by the largest real value mapping. WAITS is further enhanced by a local search that defines a set of neighborhood solutions to improve the accuracy of the obtained solutions. Fourteen different case studies including 122 test problems are employed for analyzing the performance of the proposed WAITS. The results show that the proposed memetic algorithm finds near-optimal solutions within an acceptable computational time and outperforms similar algorithms from the literature to which it is compared.
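The "largest real value" mapping mentioned here turns a continuous position vector into a permutation by ranking its components; a minimal sketch (the exact ordering convention is an assumption, not taken from the paper):

```python
import numpy as np

def largest_value_mapping(position):
    """Map a continuous vector to a permutation: the index with the largest
    coordinate is ranked first, the next largest second, and so on."""
    position = np.asarray(position)
    # argsort of the negated vector lists indices from largest to smallest
    return np.argsort(-position)

perm = largest_value_mapping([0.3, 1.7, -0.2, 0.9])
```

Every continuous candidate thus decodes to a valid assignment, so the standard continuous WA update rules can be reused unchanged on the combinatorial QAP.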
       
  • A multi-objective evolutionary algorithm based on decomposition and
           constraint programming for the multi-objective team orienteering problem
           with time windows
     • Abstract: Publication date: Available online 9 September 2018. Source: Applied Soft Computing. Author(s): Wanzhe Hu, Mahdi Fathi, Panos M. Pardalos. The team orienteering problem with time windows (TOPTW) is a well-known variant of the orienteering problem (OP), which originated from the sports game of orienteering. Since the TOPTW has many real-world applications, such as disaster relief routing and home fuel delivery, it has been studied extensively. In the classical TOPTW, only one profit is associated with each checkpoint, while in many practical applications each checkpoint can be evaluated from different aspects, which results in multiple profits. In this study, the multi-objective team orienteering problem with time windows (MOTOPTW), where checkpoints with multiple profits are considered, is introduced to find the set of Pareto optimal solutions to support decision making. Moreover, a multi-objective evolutionary algorithm based on decomposition and constraint programming (CPMOEA/D) is developed to solve the MOTOPTW. CPMOEA/D integrates the advantages of decomposition approaches for handling multi-objective optimization problems with those of constraint programming for combinatorial optimization problems. Finally, the proposed algorithm is applied to public benchmark instances. The results are compared with the best-known solutions from the literature and show improvements over them.
       
  • A robust power system stabilizer for enhancement of stability in power
           system using adaptive fuzzy sliding mode control
     • Abstract: Publication date: Available online 9 September 2018. Source: Applied Soft Computing. Author(s): Prakash K. Ray, Shiba R. Paital, Asit Mohanty, Foo Y.S. Eddy, Hoay Beng Gooi. This paper presents the design of a power system stabilizer (PSS) based on conventional fuzzy-PID and type-1 fuzzy controllers for stability improvement in single- as well as multi-machine power systems connected to an infinite bus under different loading conditions. Fuzzy and integral sliding mode controllers (FSMC and I-SMC) are also incorporated with the PSS to improve stability performance. However, the chattering present in these controllers may lead to sustained low-frequency oscillations of small magnitude that disturb the power transfer capability and the stability of the system. Therefore, to enhance the performance and efficiency of the power system, a novel adaptive fuzzy sliding mode controller (AFSMC) with a robust sliding surface is designed to overcome the possible chattering due to system uncertainties and dynamics. The stability of the AFSMC is ensured through Lyapunov analysis and synthesis tests. In addition to the graphical simulation analysis, a quantitative stability approach and real-time tests using an OPAL-RT OP5600 are also carried out to augment the stability study. Further, stability tests using eigenmodes, root locus and Bode plots are presented to assess the stability performance of the proposed controller. Both the qualitative and quantitative analyses confirm the better and more robust performance of the proposed controller in comparison with the conventional fuzzy-PID and type-1 fuzzy controllers.
       
  • Adoption of an improved PSO to explore a compound multi-objective energy
           function in protein structure prediction
     • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Shuangbao Song, Junkai Ji, Xingqian Chen, Shangce Gao, Zheng Tang, Yuki Todo. The protein structure prediction (PSP) problem, i.e., predicting the three-dimensional structure of a protein from its sequence, remains challenging in computational biology. The inaccuracy of existing protein energy functions and the huge conformation search space make the problem difficult to solve. In this study, the PSP problem is modeled as a multi-objective optimization problem. A physics-based energy function and a knowledge-based energy function are combined to construct a three-objective energy function. An improved multi-objective particle swarm optimization algorithm coupled with two archives is employed to search the conformation space. In addition, a mechanism based on Pareto non-dominated sorting is designed to properly handle slightly worse solutions. Finally, the experimental results demonstrate the effectiveness of the proposed approach. This paper thus offers a new perspective on solving the PSP problem by means of multi-objective optimization.
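The Pareto non-dominated sorting mechanism this abstract relies on reduces to a simple dominance test; a minimal sketch assuming minimization of all objectives (function names are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only the points no other point dominates (the first front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

front = non_dominated([(1, 3), (2, 2), (3, 1), (2.5, 2.5)])
```

Here (2.5, 2.5) is dominated by (2, 2) and dropped, while the three trade-off points survive as the current front.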
       
  • Hamiltonian path based image steganography scheme with improved
           imperceptibility and undetectability
     • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Gyan Singh Yadav, Aparajita Ojha. Steganography is the art of concealing secret information in some cover medium (picture, text, etc.) and is used at large scale in different security systems. Various schemes for data hiding in images have been proposed over the last two decades that focus on improving the visual quality of the cover image, but there is always a trade-off between imperceptibility and payload. Another important issue is data security: histograms can reveal the existence and quantum of data hidden in a stego image. The main challenge for high-capacity data embedding schemes is therefore to reduce image distortion as well as to keep the data secure through alternate mechanisms such as secret keys. In this paper, we propose a novel data hiding scheme with a two-fold objective. Data security is ensured by using a Hamiltonian path to generate a secret key for accessing data embedding patterns. Due to the complexity of Hamiltonian path generation, the embedding patterns become almost impossible to guess, even if the presence of secret data is detected using steganographic attacks. Secondly, two optimization strategies are proposed to minimize the embedding effect on the stego image: the first focuses on reducing the bit-flipping cost, while the other reduces histogram distortion. The proposed method is shown to outperform some existing and recent approaches in terms of both payload and imperceptibility.
       
  • Self-evolving type-2 fuzzy brain emotional learning control design for
           chaotic systems using PSO
     • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Tien-Loc Le, Chih-Min Lin, Tuan-Tu Huynh. This work presents the design of an interval type-2 fuzzy brain emotional learning controller (T2FBELC) combined with a self-evolving algorithm that allows the network to automatically reach an optimal structure from an empty initial rule base. In the control system design, the T2FBELC is the main controller, used to mimic an ideal controller, while a robust controller compensates for the residual error. The parameter adaptive laws of the proposed system are derived using the steepest-descent gradient approach. In addition, particle swarm optimization (PSO) is applied to find the optimal learning rates for the parameter adaptive laws. The stability of the proposed algorithm is guaranteed using a Lyapunov function. Finally, the effectiveness of the proposed control system is verified by numerical simulations of chaotic systems.
       
  • Modified differential evolution approach for practical optimal reactive
           power dispatch of hybrid AC–DC power systems
     • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Samir Sayah. Optimal reactive power dispatch (ORPD) is one of the most important functions of modern energy management systems, used to ensure stable and secure operating conditions of power networks. High-voltage direct current (HVDC) transmission systems are now widely used in modern electrical power systems due to their economic and technical advantages, but the incorporation of a DC link introduces more complexity into the ORPD computation. In this paper, an efficient enhanced differential evolution (DE) algorithm with a new mutation strategy is proposed to solve the ORPD problem incorporating a two-terminal HVDC transmission link. To the best of the author’s knowledge, the performance of DE on the ORPD task has not been investigated before. The performance of the proposed strategy has been investigated on two hybrid AC–DC systems: the modified New England 39-bus test system and the practical, large-scale Algerian 114-bus electric network. To illustrate the effectiveness and potential of the proposed approach, the achieved results were evaluated and compared with those reported recently in the literature.
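The paper's new mutation strategy is not specified in this abstract, but enhanced DE variants build on the classical DE/rand/1/bin operator, which a minimal sketch can show (population, F and CR values are illustrative):

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """Classical DE/rand/1 mutation with binomial crossover for target i."""
    rng = rng or np.random.default_rng(0)
    n, d = pop.shape
    # three distinct donors, all different from the target index i
    idx = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])     # differential mutation
    cross = rng.random(d) < CR                     # binomial crossover mask
    cross[rng.integers(d)] = True                  # keep at least one gene
    return np.where(cross, mutant, pop[i])

pop = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]])
trial = de_rand_1_bin(pop, 0)
```

The trial vector replaces the target only if it yields a better ORPD objective value, which is the usual greedy DE selection.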
       
  • Hierarchical extreme learning machine based image denoising network for
           visual Internet of things
     • Abstract: Publication date: Available online 8 September 2018. Source: Applied Soft Computing. Author(s): Yifan Yang, Hong Zhang, Ding Yuan, Daniel Sun, Guoqiang Li, Rajiv Ranjan, Mingui Sun. In the visual Internet of things (VIoT), imaging sensors must achieve a balance between limited bandwidth and useful information when images contain heavy noise. In this paper, we address the problem of removing heavy noise and propose a novel hierarchical extreme learning machine-based image denoising network, which comprises a sparse auto-encoder and a supervised regression. Due to the fast training of a hierarchical extreme learning machine, an effective image denoising system that is robust to various noise levels can be trained more efficiently than with other denoising methods that use deep neural networks. Our framework also contains a non-local aggregation procedure that fine-tunes noise reduction according to structural similarity. Compared with the compression ratio of noisy images, the compression ratio of denoised images can be dramatically improved, so the method achieves a low communication cost for data interactions in the VIoT. Experimental studies on images, including both hand-written digits and natural scenes, demonstrate that the proposed technique achieves excellent performance in suppressing heavy noise. Further, it greatly reduces the training time and outperforms other state-of-the-art approaches on denoising indexes such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
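The speed the abstract attributes to extreme learning machines comes from training only the output layer in closed form; a hedged single-layer sketch on a toy regression problem (the network sizes and data are illustrative, not the paper's architecture):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Fix random hidden weights; solve output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)              # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y        # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a toy 1-D regression target:
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
params = elm_fit(X, y)
pred = elm_predict(X, *params)
```

Because no iterative back-propagation is needed, training reduces to one pseudoinverse, which is why such systems train far faster than conventional deep networks.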
       
  • An adaptive fuzzy predictive control of nonlinear processes based on
           Multi-Kernel least squares support vector regression
     • Abstract: Publication date: Available online 7 September 2018. Source: Applied Soft Computing. Author(s): I. Boulkaibet, K. Belarbi, S. Bououden, M. Chadli, T. Marwala. In this paper, an adaptive fuzzy Generalized Predictive Control (GPC) scheme is proposed for discrete-time nonlinear systems via a Takagi-Sugeno system based on Multi-Kernel Least Squares Support Vector Regression (TS-LSSVR). The proposed adaptive TS-LSSVR is constructed using multi-kernel least squares support vector regression, and its learning procedure proceeds in three steps. In the first, offline step, the antecedent parameters of the TS-LSSVR are initialized using a fuzzy c-means clustering algorithm. The second, online step adapts the antecedent parameters using a back-propagation algorithm. The final online step uses the Fixed-Budget Kernel Recursive Least Squares algorithm to obtain the consequent parameters. Furthermore, an adaptive generalized predictive control scheme for nonlinear systems is introduced by integrating the proposed adaptive TS-LSSVR into the generalized predictive controller. The reliability of the proposed adaptive TS-LSSVR GPC controller is investigated by controlling two nonlinear systems: a surge tank and a continuous stirred tank reactor (CSTR). The proposed controller has demonstrated good results and efficiently controls the nonlinear plants; it is also able to deal with disturbances and variations in the nonlinear systems.
       
  • Automatically discovering clusters of algorithm and problem instance
           behaviors as well as their causes from experimental data, algorithm
           setups, and instance features
     • Abstract: Publication date: Available online 5 September 2018. Source: Applied Soft Computing. Author(s): Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, Ke Tang. In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Most algorithms in the domain are anytime algorithms, meaning that they can improve their approximation quality over time. This means that one algorithm may initially perform better than another one but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared. Moreover, a researcher does not just want to know which algorithm performs best and which problem is the hardest – she/he wants to know why. In this paper, we introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose causes why a certain algorithm setup (or problem instance) belongs to a certain behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster ids. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of “quantitative” algorithm behavior, our process also allows for “qualitative” conclusions by feeding it with data which is normalized based on problem features or algorithm parameters. It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies. We conclude our article with a comprehensive analysis of the drawbacks of our method and with suggestions on how it can be improved.
       
  • Improved optimal harmonic reduction method in PWM AC–AC converter using
           modified Biogeography-Based Optimization Algorithm
     • Abstract: Publication date: Available online 5 September 2018. Source: Applied Soft Computing. Author(s): Mohammad Esmaeil Hassanzadeh, Saeed Hasanvand. The pulse width modulation (PWM) AC-AC converter is a power electronic device with many applications because it can control the load voltage and active power. This converter chops the voltage waveform, so the output voltage contains harmonics. Since the fundamental component of the output voltage influences the active power, this paper proposes assigning a weighting factor to the optimization problem of harmonic reduction in the converter. To solve the optimization problem, a hybrid algorithm combining biogeography-based optimization (BBO) and simulated annealing (SA) is presented, which has the advantages of the two methods simultaneously. Since many evolutionary optimization algorithms may not reach the globally optimal solution, a local search algorithm (SA) is applied in an inner loop to the solution obtained by the main search algorithm (BBO); in other words, a second-level optimization by SA is applied to the best answer of BBO in each iteration to find the global extreme point. The proposed algorithm has been applied to some standard benchmark functions. After validation on these problems, the main problem, a practical one of minimizing the harmonics of a PWM AC-AC converter, is solved. Simulation results show the efficiency of the proposed weighting-factor method in optimizing this practical power engineering problem, reducing harmonics while improving the fundamental voltage.
       
  • A decomposition-based multi-objective optimization approach considering
           multiple preferences with robust performance
     • Abstract: Publication date: Available online 5 September 2018. Source: Applied Soft Computing. Author(s): Xinqi Zhu, Zhenghong Gao, Yiming Du, Shixin Cheng, Fang Xu. In this paper, we propose a decomposition-based multi-objective optimization approach considering multiple preferences, expressed by means of reference points, and with robust performance (mprMOEA/D). This algorithm is able to find multiple preferred regions in a single run, and its performance is robust across different problems. The proposed algorithm uses a subpopulation (SP) for each reference point to search for the corresponding preferred region. An external population (EP) is maintained to selectively preserve solutions from all the SPs, and it can be revisited when producing new solutions for each SP. The proposed collaboration mechanism between the SPs and the EP helps with convergence and diversity preservation. To obtain robust performance, local crossover coordinate systems, which coincide with the local manifold of the Pareto set, are introduced into mprMOEA/D for the crossover operator of differential evolution, alleviating the influence of the overall Pareto set shape. The effects of these techniques on the proposed algorithm are discussed, and the robust performance of the proposed approach is validated on numerical functions in comparison with four existing approaches. Experimental results show that the proposed algorithm outperforms the other algorithms.
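MOEA/D-style decomposition, which this approach builds on, scalarizes the objectives per subproblem; the weighted Tchebycheff function is the common choice, sketched below with illustrative weights and ideal point (the paper's exact scalarization is not stated in the abstract).

```python
def tchebycheff(f, weights, z_star):
    """Weighted Tchebycheff scalarization of objective vector f
    relative to the ideal point z_star (minimization)."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# One subproblem with equal weights and ideal point at the origin:
g = tchebycheff([2.0, 3.0], [0.5, 0.5], [0.0, 0.0])
```

Each subpopulation can then minimize its own scalarized value while the external population keeps the overall non-dominated set.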
       
  • A dynamic multiple populations particle swarm optimization algorithm based
           on decomposition and prediction
     • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Ruochen Liu, Jianxia Li, Jing Fan, Licheng Jiao. Dynamic multi-objective optimization problems (DMOPs) involve multiple optimization objectives which conflict with one another and change over time or environment. A novel dynamic multiple-population particle swarm optimization algorithm based on decomposition and prediction (denoted DP-DMPPSO) is proposed to solve DMOPs. Each objective is optimized by one population, and each population shares its information with the other populations. The populations evolve independently using a modified particle swarm optimization (PSO). An external archive is adopted to store the non-dominated solutions selected from all populations during the evolutionary process, and this archive is output as the final solution set. A mechanism for updating the archive based on decomposition of the objective space (DOS) is proposed. In addition, a population prediction mechanism is employed to accelerate convergence to the true Pareto front. DP-DMPPSO is tested on a set of benchmark problems and compared with several state-of-the-art algorithms. The results show that DP-DMPPSO is highly competitive for solving dynamic multi-objective optimization problems.
       
  • A hybrid hierarchical fault diagnosis method under the condition of
           incomplete decision information system
     • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Yawei Ge, Mingqing Xiao, Zhao Yang, Lei Zhang, Yajun Liang. Attribute reduction plays a key role in the fault diagnosis of incomplete decision information systems (IDIS), improving the efficiency and accuracy of fault diagnosis. Tolerance relation-based attribute reduction is widely used in IDIS. However, fuzziness in relation-based classification is always present in practical attribute reduction problems for fault diagnosis, owing to the incompleteness and uncertainty of the data, and traditional tolerance relation-based attribute reduction methods are not well suited to fault diagnosis in IDIS. Therefore, this paper proposes a hybrid hierarchical fault diagnosis method that combines tolerance relation-based attribute reduction with an integrated logarithmic fuzzy preference programming (LFPP) based methodology. The method utilizes both qualitative and quantitative data and constructs a hierarchical structure for fault diagnosis in IDIS. The integrated LFPP-based methodology, the sorting part of the proposed method, obtains the unique normalized optimal significance priority vector for the attribute fuzzy pairwise comparison matrices simultaneously and directly. The tolerance relation-based attribute reduction method, the decomposing part, decomposes the fault attribute reduction problem into multiple sub-problems. Hence, the proposed hybrid method can handle the fuzziness of relation-based classification and mitigate the complexity of attribute reduction for fault diagnosis in IDIS. Finally, an engineering case of strategy reduction for fault diagnosis is provided to demonstrate the feasibility of the proposed method and to obtain the reduced diagnosis strategies. Another test case verifies the validity of the reduction results and compares the proposed method with other methods, showing that it is indeed efficient and has clear advantages in producing higher accuracy, reducing difficulty and mitigating complexity in fault diagnosis.
       
  • Attribute reduction in incomplete ordered information systems with fuzzy
           decision
     • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Wenbin Qian, Wenhao Shu. Rough set theory has been applied extensively to attribute reduction. Classical rough sets provide a theoretical framework for attribute reduction based on complete data with regular attributes whose domains are not ordered by preference. However, their scope does not include incomplete data with fuzzy decisions under a preference-ordered domain, which are common in real-world applications. Therefore, in this study, a general framework is proposed for attribute reduction from incomplete ordered information systems with fuzzy decisions by combining dominance-based rough sets with α-cut sets, where α is the fuzzy decision attribute value. First, judgement theorems and discernibility functions are established by applying Boolean reasoning techniques to attribute reduction in consistent and inconsistent incomplete ordered information systems with fuzzy decisions. In addition, forward and backward attribute reduction algorithms are designed for consistent and inconsistent systems, respectively, to find near-optimal attribute reducts. Finally, experimental results on different datasets demonstrate that the proposed algorithms are more effective for attribute reduction in most cases than other reduction algorithms.
       
  • An innovative deep architecture for aircraft hard landing prediction based
           on time-series sensor data
     • Abstract: Publication date: Available online 3 September 2018. Source: Applied Soft Computing. Author(s): Chao Tong, Xiang Yin, Jun Li, Tongyu Zhu, Renli Lv, Liang Sun, Joel J.P.C. Rodrigues. This paper proposes an innovative deep architecture for aircraft hard landing prediction based on Quick Access Recorder (QAR) data. In the field of the industrial IoT, devices collect data and send them to an open IoT cloud platform for processing and analysis; the prediction of aircraft hard landings is a typical IoT application in the aviation field. Firstly, the 15 most relevant landing sensor parameters are chosen from 260 according to both aeronautical theory and feature engineering. Secondly, a deep prediction model based on Long Short-Term Memory (LSTM) is developed to predict hard landing incidents using the selected sensor data. We then adjust the model structure and conduct contrastive experiments, and finally use the Mean Square Error (MSE) as the evaluation criterion to select the best model. Experimental results show higher prediction accuracy on QAR datasets compared with the state-of-the-art, indicating that this model is effective and accurate for hard landing prediction, which helps to guarantee passengers’ safety and reduce the incidence of landing accidents. The proposed work is also conducive to building and developing industrial IoT systems in the aviation field.
       
  • Exploring the probabilistic graphic model of a hybrid multi-objective
           Bayesian estimation of distribution algorithm
     • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Marcella S.R. Martins, Myriam Delgado, Ricardo Lüders, Roberto Santana, Richard A. Gonçalves, Carolina P. de Almeida. The Hybrid Multi-objective Bayesian Estimation of Distribution Algorithm (HMOBEDA) has been shown to be very competitive for Many-Objective Optimization Problems (MaOPs). The Probabilistic Graphic Model (PGM) of HMOBEDA expands the possibilities for exploration, as it provides the joint probability of decision variables, objectives, and configuration parameters of an embedded local search. This work investigates different sampling mechanisms of HMOBEDA, applying the considered approaches to two different combinatorial MaOPs. Moreover, the paper provides a broad set of statistical analyses of its PGM model. These analyses evaluate how the interactions among variables, objectives and local search parameters are captured by the model, and how information collected from different runs can be aggregated and explored in a Probabilistic Pareto Front. In the experiments, two variants of HMOBEDA are compared with the original version, each with a different set of evidence fixed during the sampling process. Results for instances of the multi-objective knapsack problem with 2 to 5 and 8 objectives show that the best variant outperforms the original HMOBEDA in terms of convergence and diversity of the solution set. This best variant is then compared with five state-of-the-art evolutionary algorithms on the knapsack problem instances as well as on a set of MNK-landscape instances with 2, 3, 5 and 8 objectives; HMOBEDA outperforms all of them.
       
  • Optimized tool path planning for five-axis flank milling of ruled surfaces
           using geometric decomposition strategy and multi-population harmony search
           algorithm
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Jin Yi, Chih-Hsing Chu, Chi-Lung Kuo, Xinyu Li, Liang Gao. Tool path planning is key to ensuring high machining quality and productivity in 5-axis flank milling of ruled surfaces. Previous studies have shown that optimization-driven tool path planning can effectively reduce the geometrical errors on the finished surface. However, solving the corresponding optimization problem is a challenging task involving a large number of decision variables. This paper proposes a novel approach to generating optimized tool paths for the 5-axis flank finishing cut, based on a geometric decomposition strategy and a multi-population harmony search algorithm. The proposed approach geometrically divides the surface to be machined into a number of segments. The tool paths on these sub-surfaces are independently optimized by the multi-population harmony search algorithm, and the individual tool paths are then combined into a complete one. Test results on representative surfaces show that the proposed approach produces higher machining precision with less computational time than previously reported methods. The computational time is further reduced by parallel computing based on the Message Passing Interface. A detailed analysis characterizes how the number of divisions affects the optimization results, and the proposed approach also shows good scalability as the number of cutter locations increases.
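The multi-population variant used in the paper is not detailed here; the basic harmony search loop it extends can be sketched as below (the sphere objective, bounds and the HMS/HMCR/PAR/bandwidth values are illustrative defaults, not the authors' settings):

```python
import random

def harmony_search(obj, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Basic harmony search: improvise a new harmony from memory
    (rate hmcr), pitch-adjust it (rate par, bandwidth bw), and keep
    it if it beats the worst stored harmony."""
    rng = random.Random(seed)
    lo, hi = bounds
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:            # take value from memory
                v = mem[rng.randrange(hms)][j]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                              # random re-initialization
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(range(hms), key=lambda i: obj(mem[i]))
        if obj(new) < obj(mem[worst]):
            mem[worst] = new                   # replace worst harmony
    return min(mem, key=obj)

sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere, dim=5, bounds=(-5.0, 5.0))
```

A multi-population version would run several such memories in parallel, one per surface segment, then splice the per-segment results into a complete tool path.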
       
  • Oscillatory Particle Swarm Optimizer
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Haiyan Shi, Shilong Liu, Hongkun Wu, Ruowei Li, Sanchi Liu, Ngaiming Kwok, Yeping Peng. The Particle Swarm Optimization (PSO) algorithm is an attractive meta-heuristic approach for difficult optimization problems. It is able to produce satisfactory results when classical analytic methods cannot be applied. However, the design of PSO has usually been based on ad-hoc attempts, and its behavior could not be exactly specified. In this work, we propose to drive particles into oscillatory trajectories so that the search space can be covered more completely. A difference-equation-based analysis is conducted to reveal conditions that guarantee trajectory oscillation and solution convergence. The settings of the cognitive and social learning factors and the inertia weight are then determined. In addition, a new strategy that directs these parameters to follow a linearly decreasing profile with a perturbation is formulated. Experiments on function optimization are conducted and compared with currently available methods. The results confirm that the proposed Oscillatory Particle Swarm Optimizer (OSC-PSO) outperforms other recent PSO algorithms that use adaptive inertia weights.
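The oscillation conditions derived in the paper are not reproduced in this listing; a generic PSO loop with a linearly decreasing, perturbed inertia weight — loosely in the spirit of the strategy described — can be sketched as follows (the 0.9 → 0.4 profile, ±0.05 perturbation and c1 = c2 = 2.0 are conventional placeholder values, not the authors' derived settings):

```python
import random

def osc_pso(obj, dim, bounds, n=20, iters=300, c1=2.0, c2=2.0, seed=3):
    """PSO sketch: inertia weight follows a linearly decreasing
    profile (0.9 -> 0.4) plus a small random perturbation."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal bests
    pf = [obj(x) for x in X]
    gi = min(range(n), key=lambda i: pf[i])  # global best
    G, gf = P[gi][:], pf[gi]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters + rng.uniform(-0.05, 0.05)
        for i in range(n):
            for j in range(dim):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (G[j] - X[i][j]))
                X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))
            f = obj(X[i])
            if f < pf[i]:                    # update personal/global bests
                P[i], pf[i] = X[i][:], f
                if f < gf:
                    G, gf = X[i][:], f
    return G, gf

G, gf = osc_pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```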
       
  • Minimizing the resource consumption of heterogeneous batch-processing
           machines using a copula-based estimation of distribution algorithm
    • Abstract: Publication date: Available online 1 September 2018. Source: Applied Soft Computing. Author(s): Chuang Liu, Huaping Chen, Rui Xu, Yu Wang. The two-stage flow-shop scheduling problem investigated in this work aims to minimize resource consumption when processing jobs of non-identical sizes. The flow shop consists of two batch-processing machines (BPMs): a parallel BPM and a sequential BPM. The makespan and resource consumption are considered together: the makespan is the constraint condition, and the resource consumption is the objective. A copula-based Estimation of Distribution Algorithm (cEDA) is used to solve the problem. Individuals are coded by the allocated resource sequences of all jobs on the two machines, and a convex resource consumption function is adopted to model the relationship between the processing time of the jobs and the resources allocated to them. A Gaussian distribution is adopted as the marginal probabilistic distribution of all components. The proposed copula function C1 assumes independence among the components, whereas the Clayton copula function C2, introduced for comparison, assumes that all components are interrelated. Computational experiments and comparisons verify the effectiveness of the proposed cEDA, and the two copula functions C1 and C2 adopted in the approach are compared.
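Under the independence copula (the C1 variant described above), joint sampling reduces to fitting and sampling each Gaussian marginal separately. A minimal sketch of that model-building/sampling step (the selected individuals are made-up numbers; the Clayton variant would require an explicit copula sampler and is not shown):

```python
import math
import random

def fit_gaussian_marginals(pop):
    """Estimate a Gaussian (mean, std) per component of the
    selected individuals, as in the C1 (independence) variant."""
    dim = len(pop[0])
    stats = []
    for j in range(dim):
        col = [ind[j] for ind in pop]
        mu = sum(col) / len(col)
        var = sum((v - mu) ** 2 for v in col) / len(col)
        stats.append((mu, math.sqrt(var) + 1e-9))  # avoid zero std
    return stats

def sample(stats, n, rng):
    """With an independence copula, the joint sample is just an
    independent draw from each marginal."""
    return [[rng.gauss(mu, sd) for mu, sd in stats] for _ in range(n)]

rng = random.Random(0)
selected = [[1.0, 5.0], [1.2, 5.5], [0.8, 4.5]]   # toy elite set
stats = fit_gaussian_marginals(selected)
offspring = sample(stats, 100, rng)
```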
       
  • A genetic algorithm based solution to the Minimum-Cost Bounded-Error
           Calibration Tree problem
    • Abstract: Publication date: Available online 31 August 2018. Source: Applied Soft Computing. Author(s): Hüseyin Akcan. Sensors in wireless sensor networks are required to self-calibrate periodically during their prolonged deployment periods. In calibration planning, employing intelligent algorithms is essential to optimize both the efficiency and the accuracy of calibration. The Minimum-Cost Bounded-Error Calibration Tree (MBCT) problem is a spanning tree problem with two objectives: minimizing the spanning tree cost and bounding the maximum post-calibration skew. The decision version of the MBCT problem is proven to be NP-Complete. In this paper, the GAWES algorithm is presented as a novel genetic algorithm based solution to the optimization version of the MBCT problem. GAWES adopts extreme efficient solution generation within the genetic algorithm to improve the search quality. It is demonstrated through experimentation that GAWES is superior to the existing state-of-the-art algorithm in both energy efficiency and calibration accuracy.
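GAWES itself encodes spanning trees and generates extreme efficient solutions; neither encoding is given in this listing. As a generic sketch of the genetic-algorithm loop such a method builds on (bit-string encoding, tournament selection, one-point crossover, bit-flip mutation and elitism are our simplification, with OneMax as a stand-in fitness):

```python
import random

def ga_max(fitness, n_bits, pop_size=40, gens=60, pc=0.9, pm=0.02, seed=2):
    """Plain GA loop: tournament selection, one-point crossover,
    bit-flip mutation, elitist replacement."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        elite = max(pop, key=fitness)
        nxt = [elite[:]]                       # elitism
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < pc:              # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for j in range(n_bits):
                    if rng.random() < pm:      # bit-flip mutation
                        child[j] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = ga_max(lambda ind: sum(ind), n_bits=24)
```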
       
  • Identifying and properly handling context in crowdsourcing
    • Abstract: Publication date: Available online 31 August 2018. Source: Applied Soft Computing. Author(s): Marcelo Loor, Guy De Tré. Using proper social media content in decision making is an ongoing challenge for modern information management. Since such content can be highly subjective, diverse and thus hard to process by computer-based algorithms, its truthfulness and suitability are increasingly assessed by people through crowdsourcing services. However, crowdsourced assessments can also be highly subjective. In this paper, we propose a novel method to approximate the level to which the contexts of subjective (fuzzy) judgments on social media content are perceived as alike, thereby providing a tool to identify, measure and handle context in crowdsourcing. To compute this approximation, the proposed method takes as input two augmented (Atanassov) intuitionistic fuzzy sets (AAIFSs), each characterizing the appraisals that a respondent makes on a specific collection of social media posts. The approximation is based on the AAIFS elements corresponding to the appraisals of a specific number of posts that are deemed to be well-fitted (or unfitted) specimens of the concept under analysis. Experiments with simulated appraisals demonstrate the effectiveness of the proposed method.
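The "augmented" part of the AAIFS construction is specific to the paper; plain (Atanassov) intuitionistic fuzzy elements and a standard distance between two respondents' appraisal sets can be sketched as below (the appraisal values and the normalized Hamming distance are our own illustrative choices, not the paper's comparison measure):

```python
def ifs_element(mu, nu):
    """An (Atanassov) intuitionistic fuzzy element: membership mu and
    non-membership nu with mu + nu <= 1; the remainder is the
    hesitation margin pi."""
    assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0
    return (mu, nu, 1.0 - mu - nu)

def ifs_distance(a, b):
    """Normalized Hamming distance between two IFSs given as lists
    of (mu, nu, pi) elements."""
    n = len(a)
    return sum(abs(x[0] - y[0]) + abs(x[1] - y[1]) + abs(x[2] - y[2])
               for x, y in zip(a, b)) / (2.0 * n)

# Two respondents' appraisals of the same three posts.
r1 = [ifs_element(0.8, 0.1), ifs_element(0.3, 0.6), ifs_element(0.5, 0.4)]
r2 = [ifs_element(0.7, 0.2), ifs_element(0.4, 0.5), ifs_element(0.5, 0.4)]
d = ifs_distance(r1, r2)
```

A small distance indicates the two respondents perceive the posts similarly, which is the raw ingredient the paper's context-alikeness approximation works from.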
       
  • Multi-site post-processing of numerical forecasts using a polynomial
           network substitution for the general differential equation based on
           operational calculus
    • Abstract: Publication date: Available online 31 August 2018. Source: Applied Soft Computing. Author(s): Ladislav Zjavka. Precise daily forecasts of local wind speed are necessary for planning changeable wind power production. Anomalies in local weather cause inaccuracies in daily predictions made with meso-scale numerical models, and statistical methods using historical data can adapt the forecasts to specific local conditions. Based on the two-stage approach of the Perfect Prog method, used routinely in meteorology, the article proposes an enhanced forecast correction procedure with initial estimation of the optimal number of training days whose latest data observations are used to elicit daily prediction models. Determining this main training parameter improves the middle-term numerical forecasts of wind speed in the majority of prediction days. Subsequently, in the second stage, the correction model post-processes numerical forecasts of the training input variables to calculate 24-hour prediction series of the target wind speed at the corresponding time. A differential polynomial network is used to develop the test and post-processing models, which represent the current spatial relations between the relevant meteorological input and output quantities. This machine learning method defines, and substitutes for, a general linear partial differential equation able to describe local atmospheric dynamics that are too complex and uncertain to be represented by standard soft-computing techniques. The complete derivative formula is decomposed into specific sub-solutions of unknown node sum functions in the multi-layer polynomial network structure, using operational calculus to model the sought separable output function.
       
  • Verification of fuzzy UML models with fuzzy Description Logic
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Fu Zhang, Jingwei Cheng. The fuzzy UML model has been introduced to enable the conceptual modeling of imprecise data in many applications. Preventing, detecting, and correcting errors as early as possible in the modeling process by verifying the correctness of fuzzy UML models is desirable, but verifying the models manually is difficult. Thanks to their effective reasoning services, fuzzy Description Logics (DLs) are natural candidates for supporting the verification of fuzzy UML models. In this paper we propose a complete and practical approach for verifying fuzzy UML models with fuzzy DLs. In detail, we first propose an approach for transforming a fuzzy UML model into a fuzzy DL knowledge base. We then reduce the verification tasks of the fuzzy UML model to reasoning problems over the transformed fuzzy DL knowledge base. The correctness of the approach is proved, and a running example explains the approach in detail. Finally, we implemented a prototype that can transform fuzzy UML models into fuzzy DL knowledge bases. Case studies show that the proposed approach is feasible and that the prototype translator works in practice. By means of our approach and tool, the transformed fuzzy DL knowledge base and reasoning tasks can be fed to existing fuzzy DL reasoners, and the reasoning results reported by the reasoners can be passed back to the designer to improve the quality of the fuzzy UML model.
       
  • A multi-start iterated local search algorithm for the uncapacitated single
           allocation hub location problem
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Jian Guan, Geng Lin, Hui-Bin Feng. The uncapacitated single allocation hub location problem (USAHLP) is a variant of the hub location problem with broad applications in transportation networks. The objective of the USAHLP is to minimize the sum of transportation cost and fixed cost. In this paper, a multi-start iterated local search algorithm (MSLSA) is proposed for solving the USAHLP. First, a randomized greedy construction procedure generates initial solutions of good quality. Solutions are then improved by an iterated local search, which consists of promising-solution selection and improvement through node reallocation, while a perturbation mechanism helps the search escape from local optima and explore new promising regions. The proposed algorithm is evaluated on four USAHLP data sets and compared with two state-of-the-art algorithms. The experimental results demonstrate the high competitiveness of MSLSA in terms of both solution quality and computational efficiency. MSLSA obtains the same solutions over multiple runs on each instance, which highlights its robustness and stability. Given its high solution quality and rapid running speed, the proposed algorithm could be a promising tool for other hub location problems.
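The hub-specific construction and node-reallocation moves are not detailed in this listing; the multi-start iterated local search skeleton they plug into can be sketched generically as below (bit-string solutions, first-improvement bit-flip local search, and a trivial count-the-ones cost are our own stand-ins for the USAHLP components):

```python
import random

def iterated_local_search(cost, n_bits, starts=3, kicks=20,
                          strength=3, seed=4):
    """Multi-start ILS sketch: each random start is improved by
    first-improvement local search; a small random perturbation
    ("kick") then tries to escape the local optimum."""
    rng = random.Random(seed)

    def local_search(sol):
        c = cost(sol)
        improved = True
        while improved:
            improved = False
            for j in range(n_bits):
                sol[j] ^= 1
                nc = cost(sol)
                if nc < c:
                    c, improved = nc, True
                else:
                    sol[j] ^= 1          # undo the unhelpful flip
        return sol, c

    best, best_c = None, float('inf')
    for _ in range(starts):
        sol, c = local_search([rng.randint(0, 1) for _ in range(n_bits)])
        for _ in range(kicks):
            cand = sol[:]
            for j in rng.sample(range(n_bits), strength):
                cand[j] ^= 1             # perturbation
            cand, cc = local_search(cand)
            if cc < c:                   # accept only improvements
                sol, c = cand, cc
        if c < best_c:
            best, best_c = sol[:], c
    return best, best_c

best, best_c = iterated_local_search(lambda s: sum(s), n_bits=30)
```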
       
  • Constraint-handling techniques in surrogate-assisted evolutionary
           optimization. An empirical study
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Mariana-Edith Miranda-Varela, Efrén Mezura-Montes. Many real-world constrained numerical optimization problems are defined through expensive models. One alternative for solving them is evolutionary algorithms; however, these usually require a considerable number of evaluations to obtain a competitive solution, which demands high computational effort. This disadvantage can be tackled with surrogate models. In this paper, a study of the relationship between a surrogate model and different constraint-handling techniques is presented. The surrogate model (k-nearest-neighbors regression, kNN regression) is assembled into differential evolution with combined variants (DECV), named surrogate-assisted DECV (SA-DECV), to approximate the objective function value and the sum of constraint violation. Four SA-DECV variants with different constraint-handling techniques are assessed, and the best variant is compared against four state-of-the-art algorithms. Twenty-four well-known test problems are solved in the experiments. According to the results, the best variant is the one that uses a ranking process as its constraint-handling technique. Furthermore, a case study is included in which a set of engineering optimization problems is solved with the best variant. Finally, the surrogate model is tested in other DE-based algorithms.
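The kNN-regression surrogate at the core of SA-DECV is simple to sketch: predict the objective of an unevaluated candidate as the average over its nearest already-evaluated points (the sample points and f(x, y) = x + y are made-up illustration data):

```python
import math

def knn_predict(train_X, train_y, x, k=3):
    """k-nearest-neighbours regression: average the targets of the
    k training points closest (Euclidean) to x."""
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(train_X[i], x))
    return sum(train_y[i] for i in order[:k]) / k

# Surrogate for f(x, y) = x + y on a handful of evaluated points.
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
y = [0.0, 1.0, 1.0, 2.0, 4.0]
approx = knn_predict(X, y, (1.0, 1.1), k=3)
```

In a surrogate-assisted DE, such cheap predictions (of both objective and constraint-violation sum) pre-screen trial vectors so that only promising candidates consume expensive real evaluations.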
       
  • Deep multi-view representation learning for social images
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Feiran Huang, Xiaoming Zhang, Zhonghua Zhao, Zhoujun Li, Yueying He. Multi-view representation learning for social images has recently made remarkable achievements in many tasks, such as cross-view classification and cross-modal retrieval. Since social images usually contain link information in addition to multi-modal content (e.g., text description and visual content), using the data content alone may result in sub-optimal multi-view representations. In this paper, we propose a Deep Multi-View Embedding Model (DMVEM) to learn joint embeddings for three views: the visual content, the associated text descriptions, and their relations. To effectively encode the link information, a weighted relation network is built from the linkages between social images and embedded into a low-dimensional vector space using the Skip-Gram model; the learned vector serves as the third view alongside the visual content and text description. To learn a joint representation from the three views, a deep learning model with a three-branch nonlinear neural network is proposed. A three-view bi-directional loss function captures the correlation between the views, and a stacked autoencoder preserves the self-structure and reconstructability of the learned representation for each view. Comprehensive experiments are conducted on image-to-text, text-to-image, and image-to-image search tasks. Compared with state-of-the-art multi-view embedding methods, our approach achieves significant performance improvements.
       
  • A novel multi-objective co-evolutionary algorithm based on decomposition
           approach
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Zhengping Liang, Xuyong Wang, Qiuzhen Lin, Fei Chen, Jianyong Chen, Zhong Ming. Recently, co-evolution mechanisms have been exploited to solve multi-objective optimization problems by using multiple subpopulations in a cooperative manner, as in co-evolutionary multi-swarm particle swarm optimization (CMPSO) based on the multiple subpopulations for multi-objective optimization (MPMO) framework, which has also been extended to cooperative multi-objective differential evolution (CMODE). Although optimizing each objective with a dedicated subpopulation is effective, the evolution and selection methods applied to the external archive are also important for co-evolution, as they strongly affect the quality and distribution of the final solutions. In this paper, we present a novel multi-objective co-evolutionary algorithm based on a decomposition approach (MCEA), which likewise uses one subpopulation per objective. A more powerful DE operator with adaptive parameter control is run on both the subpopulations and the external archive, which helps to improve each objective and diversify the tradeoff solutions in the archive. Moreover, computational resources are reassigned between the subpopulations and the external archive: once an objective stops improving, it may have reached its optimal value, and more computational resources can be assigned to the other objectives and to the external archive. In this way, the tradeoff between all objectives is well balanced, and the external archive has more opportunities to evolve. Evaluated on 31 benchmark test problems, including the ZDT, DTLZ, WFG and UF series, MCEA shows advantages over two co-evolutionary algorithms (CMPSO and CMODE) and several state-of-the-art multi-objective evolutionary algorithms (NSGA-II, SPEA2, MOEA/D-DE, MOEA/D-STM, MOEA/D-FRRMAB and MOEA/D-IR).
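The exact scalarizer MCEA uses for its decomposition is not specified in this listing; the Tchebycheff function, the common choice in the MOEA/D family it is compared against, illustrates how a subproblem compares two candidate objective vectors (the weights and objective values below are made up):

```python
def tchebycheff(fvals, weights, ideal):
    """Tchebycheff decomposition: scalarize an objective vector
    against a weight vector and the ideal point; smaller is better."""
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, ideal))

# Two candidate solutions under the same subproblem weights.
ideal = [0.0, 0.0]
w = [0.5, 0.5]
a = tchebycheff([1.0, 3.0], w, ideal)   # unbalanced: poor on objective 2
b = tchebycheff([2.0, 2.0], w, ideal)   # balanced tradeoff
```

Here b < a, so the subproblem prefers the balanced solution even though both have the same objective sum; varying the weight vector across subproblems spreads solutions along the Pareto front.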
       
  • Artificial ants and fireflies can perform colour quantisation
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): María-Luisa Pérez-Delgado. Several methods based on the behaviour of biological systems have been proposed in recent years to solve a wide range of problems; ant algorithms and firefly algorithms are two methods of this type. The Ant-Tree for Colour Quantisation algorithm was recently proposed to quantise colour images. It is a clustering-based method that creates a tree structure in which the pixels of the image are grouped into subtrees according to their similarity. A threshold value determines whether the similarity is sufficient to associate a pixel with a subtree. The computation of that threshold includes a parameter in (0, 1] whose value must be set according to the number of colours of the quantised image. Since this parameter influences the quality of the final image, it is recommended to run several tests with different values and select the best result. To reduce the influence of this parameter and improve the resulting image, this paper proposes combining the Ant-Tree for Colour Quantisation algorithm with the Firefly algorithm. Computational experiments show that the new method generates better images than popular colour quantisation methods such as Median-cut, Octree, Neuquant, Wu's method and the variance-based method.
       
  • A formal approach for matching and ranking trustworthy context-dependent
           services
    • Abstract: Publication date: Available online 30 August 2018. Source: Applied Soft Computing. Author(s): Afnan Bawazir, Wadee Alhalabi, Mubarak Mohamed, Akila Sarirete, Ammar Alsaig. Service Oriented Computing (SOC) is widely used and relied upon in enterprise application development and distributed computing. The three fundamental interacting components in SOC are service providers, service requesters, and registries. Because of the growing number of services in registries, one challenge a requester faces when choosing a service is finding a trustworthy service that satisfies both the requester and provider contexts. There is therefore a need for a ranking approach that takes into account both the rich features and the context of the service requester and provider, in order to improve the applicability of a top-ranking result. By including trustworthiness requirements in the ranking process, with the aim of providing a reliable service with respect to the requester's preferences, we put forward a generic structure and framework for matching and ranking trustworthy context-dependent services. We base our framework on logic and set theory and define a formal description of such services. We then assess the proposed architecture using a real-world case study, which shows the effectiveness of the ranking model in finding the most suitable trustworthy service according to user preferences while considering context information. The proposed framework can be utilized in several application domains.
       
  • A probability-based coevolving multi-objective algorithm for antenna array
           synthesis
    • Abstract: Publication date: Available online 29 August 2018. Source: Applied Soft Computing. Author(s): Anqi Pan, Lei Wang, Weian Guo, Qidi Wu. Antenna synthesis is a multi-objective problem with large-scale decision variables, including but not limited to the elements' locations, amplitudes, and phases. Numerous studies have applied heuristic algorithms to synthesis optimization, and some of them employed multi-objective strategies, which have been shown to provide better optimal solutions for decision makers. Moreover, since modern array antennas involve more complex relationships and larger numbers of decision variables, simultaneously considering a high-dimensional decision space and multiple requirements is essential for highly directive patterns; to the best of our knowledge, however, no existing literature addresses this combination. In this paper, a probability-based coevolving particle swarm optimization (PCMOPSO) is proposed for multi-objective optimization. In PCMOPSO, decision variables are allocated into subgroups based on the cooperative coevolution framework and then optimized through a probability-based learning strategy to accelerate convergence while maintaining diversity. Meanwhile, a grouping penalty (GP) technique is proposed to improve the grouping in PCMOPSO. The proposed algorithm has been tested for efficacy on several benchmark functions under different decision dimensions, and the experimental results show that PCMOPSO outperforms other similar methods. Furthermore, PCMOPSO is applied to optimize array parameters in several antenna synthesis settings, including large array design cases, with significant improvement over other state-of-the-art multi-objective algorithms and known optimal solutions in the literature.
       
  • A new approach for reject inference in credit scoring using kernel-free
           fuzzy quadratic surface support vector machines
    • Abstract: Publication date: Available online 29 August 2018. Source: Applied Soft Computing. Author(s): Ye Tian, Ziyang Yong, Jian Luo. Credit scoring models have offered benefits to lenders and borrowers for many years. In practice, however, these models are normally built on a sample of accepted applicants and fail to consider the remaining rejected applicants. This may cause a sample bias, an important statistical issue, especially in online lending, where a large proportion of requests are rejected. Reject inference is a method for inferring how rejected applicants would have behaved had they been granted credit, and for incorporating this information when rebuilding a more accurate credit scoring system. Motivated by the good performance of SVM models in this area, this paper proposes a new approach based on the state-of-the-art kernel-free fuzzy quadratic surface SVM model. Our method not only performs very well in classification, as in recent works, but also avoids major issues of classical SVM models, such as searching for proper kernel functions and solving complex models. Moreover, this paper is the first to eliminate the adverse effect of outliers in credit scoring. We use two real-world loan data sets to compare our method with benchmark methods; one of the data sets is particularly valuable for the study of reject inference because the outcomes of rejected applicants are partially known. The numerical results strongly demonstrate the superiority of the proposed method in applicability, accuracy and efficiency.
       
  • Speckle de-noising of clinical ultrasound images based on fuzzy spel
           conformity in its adjacency
    • Abstract: Publication date: Available online 29 August 2018. Source: Applied Soft Computing. Author(s): Rahul Roy, Susmita Ghosh, Ashish Ghosh. This article presents a novel speckle de-noising technique that leverages fuzzy set theory to preserve the context of the echo-texture and deal with vagueness and uncertainty in ultrasound images. The work is inspired by the theory of Michailovich and Tannenbaum (2006), who showed that the log transformation of clinical images converts speckle noise to white Gaussian noise (WGN) with outliers. In this context, we propose applying a non-linear transformation (using functions with horizontal asymptotes) to the speckle noise, which maps the outliers to the asymptote of the function. This suppresses the outliers and thereby reduces the problem to the cancellation of WGN. We adhere to fuzzy set theory for defining such transformations, as they can be well characterized with vague linguistic terms. In this work, a property scene is generated using a membership function that has the ability to suppress the outliers. Thereafter, the property scene is updated such that the memberships of the spels in a neighborhood agree with each other with respect to the defined property. This, in turn, restores the homogeneity of the neighborhood when the property scene is de-fuzzified back to image space. To measure the compliance of a neighborhood spel, a concept of degree of conformity is introduced, which determines the contribution of the neighboring spel in the update of the property scene. To show the effectiveness of the proposed technique, a comparative analysis has been carried out against five state-of-the-art techniques on in silico images using five standard measures, and on in vivo ultrasound images with two performance indices. Moreover, to show the impact of the de-noising algorithm on downstream performance, a segmentation-based investigation is carried out on 102 ultrasound images of the carotid artery, together with a robustness analysis under increasing noise levels; segmentation performance is reported as box plots of precision and recall. Qualitative and quantitative analyses reveal the promising performance of the proposed technique.
       
  • Backtracking search integrated with sequential quadratic programing for
           nonlinear active noise control systems
    • Abstract: Publication date: Available online 28 August 2018. Source: Applied Soft Computing. Author(s): Wasim Ullah Khan, ZhongFu Ye, Naveed Ishtiaq Chaudhary, Muhammad Asif Zahoor Raja. In the present work, the integrated strength of the backtracking search algorithm (BSA) and sequential quadratic programming (SQP) is exploited for nonlinear active noise control (ANC) systems. Approximation theory in the mean-squared sense is utilized to construct a cost function for ANC systems based on finite impulse response (FIR) and Volterra filtering procedures. The global search efficacy of BSA, aided by rapid local refinement with SQP, is applied to effectively optimize the fitness function for ANC systems with sinusoidal, random and complex random signals, under several variants of linear/nonlinear and primary/secondary paths. Statistical observations demonstrate the worth of the stochastic solvers BSA and BSA-SQP in terms of accuracy, convergence and complexity indices.
       
  • An improved genetic algorithm for structural optimization of Au-Ag
           bimetallic nanoparticles
    • Abstract: Publication date: Available online 28 August 2018. Source: Applied Soft Computing. Author(s): Guifang Shao, Yali Shangguan, JiPing Tao, Jiwen Zheng, Tundong Liu, Yuhua Wen. Structure is crucial for bimetallic nanoparticles (NPs) because it determines their unique physical and chemical properties. Therefore, structural optimization of bimetallic NPs through theoretical calculation is of increasing importance for understanding their stability and catalytic performance. In this article, an improved genetic algorithm (IGA) is proposed to systematically investigate the structural stability of Au-Ag NPs. In the IGA, a layered coordinate ranking method is adopted to enhance structural stability during initialization, and a difference transition fitness function is introduced to keep the population diverse and preserve the best individual. Furthermore, to improve the global search ability and local optimization speed, a sphere-cut-splice crossover replaces the classical plane-cut-splice crossover of the general genetic algorithm. The performance of the IGA has been compared with the Monte Carlo simulation method and the particle swarm optimization algorithm; the results reveal that our algorithm possesses superior convergence and stability.
       
  • Stacked autoencoder with echo-state regression for tourism demand
           forecasting using search query data
    • Abstract: Publication date: Available online 28 August 2018. Source: Applied Soft Computing. Author(s): Sheng-Xiang Lv, Lu Peng, Lin Wang. Accurate tourism demand forecasting is fundamental to the tourism industry, and forecasting with search query data (SQD) has become popular in tourism management. SQD is a type of statistical time series provided by search engines that reflects netizens' attention to certain events, and scholars attempt to establish a reasonable relationship between tourism demand and SQD because of its timeliness and comprehensiveness. The current study proposes an effective deep learning technique called stacked autoencoder with echo-state regression (SAEN) to accurately forecast tourist flow based on search query data. In the proposed SAEN approach, a stacked autoencoder hierarchically learns high-level predictive indicators from a large amount of SQD and is connected to an echo-state regression layer that models the nonlinear time series relationship between tourism flow and the learned indicators. Four realistic applications (one comparative case and three extended cases in the US and China with different SQD sources) are used to verify the forecasting performance of SAEN. Numerical results indicate that SAEN outperforms current approaches in the literature, including time series approaches, econometric models, common machine learning algorithms, and state-of-the-art deep learning techniques. The structural parameters of SAEN are further analyzed empirically and theoretically, and the study finds that network depth and echo-state reservoir scale have different impacts on SAEN's performance. The proposed SAEN can be an appropriate alternative for tourism demand forecasting in complex data situations.
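The echo-state principle behind the regression layer is that a fixed random reservoir transforms the input sequence into rich states, and only a linear readout on those states is trained. A minimal reservoir-drive sketch (the reservoir size, scaling scheme and input sequence are our own toy choices; a real echo-state network would control the spectral radius properly and fit a readout):

```python
import math
import random

def reservoir_run(inputs, n_res=20, rho=0.9, seed=5):
    """Drive a fixed random reservoir with an input sequence and
    return the final state vector; only a readout trained on such
    states would be fitted in an echo-state network."""
    rng = random.Random(seed)
    W_in = [rng.uniform(-0.5, 0.5) for _ in range(n_res)]
    W = [[rng.uniform(-1, 1) for _ in range(n_res)] for _ in range(n_res)]
    # Crude stability control: scale by the max absolute row sum,
    # a cheap upper bound on the spectral radius.
    bound = max(sum(abs(v) for v in row) for row in W)
    W = [[rho * v / bound for v in row] for row in W]
    x = [0.0] * n_res
    for u in inputs:
        x = [math.tanh(W_in[i] * u
                       + sum(W[i][j] * x[j] for j in range(n_res)))
             for i in range(n_res)]
    return x

state = reservoir_run([0.2, -0.1, 0.5, 0.3, 0.0])
```

In SAEN's setting, the inputs would be the autoencoder-learned indicators rather than raw values, and the readout would regress tourist flow on the reservoir states.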
       
  • A new method for identification of fuzzy models with controllability
           constraints
    • Abstract: Publication date: Available online 28 August 2018. Source: Applied Soft Computing. Author(s): Leonel Gutiérrez, Diego Muñoz-Carpintero, Felipe Valencia, Doris Sáez. Takagi-Sugeno fuzzy models are cataloged as universal approximators and have proven to be a powerful tool for system prediction. However, in certain cases they may fail to inherit the main properties of a system, which may cause problems for control design. In particular, an unsuitable model can lead to a loss of closed-loop performance or stability, especially if that model is not controllable. Ensuring the controllability of a model is therefore essential to enable the computation of appropriate control laws that bring the system to the desired operating conditions. For that reason, a new method for identification of fuzzy models with controllability constraints is proposed in this paper. The method is based on the inclusion of a penalty component in the objective function used for consequence parameter estimation, which allows one to impose controllability constraints on the linearized models at each point of the training data. The benefits of the proposed scheme are shown in a simulation-based study of a benchmark system and a continuous stirred tank: the stability and closed-loop performance of predictive controllers using models obtained with the proposed method are better than those using models found by classical and local fuzzy identification schemes.
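The core idea, parameter estimation by a fitting objective augmented with a penalty that discourages leaving an admissible region, can be shown in one dimension. The admissible cap below is a hypothetical stand-in for the paper's controllability constraint, and the gradient-descent fit is an illustrative sketch, not the authors' estimator:

```python
def penalized_estimate(xs, ys, penalty, lam=10.0, steps=500, lr=0.01):
    """Least-squares fit of y ~ theta*x with an added penalty term:
    minimize sum((y - theta*x)^2) + lam * penalty(theta) by gradient descent.
    The penalty grows when theta leaves the admissible region."""
    theta = 0.0
    for _ in range(steps):
        # analytic gradient of the squared error term
        g = sum(-2 * x * (y - theta * x) for x, y in zip(xs, ys))
        # numeric gradient of the penalty term
        eps = 1e-6
        g += lam * (penalty(theta + eps) - penalty(theta - eps)) / (2 * eps)
        theta -= lr * g
    return theta

# Data generated by theta = 1.5, but the admissible region is capped at 1.2,
# so the penalized estimate lands between the two.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [1.5 * x for x in xs]
cap = lambda t: max(0.0, t - 1.2) ** 2   # quadratic penalty beyond the cap
theta = penalized_estimate(xs, ys, cap)
```

The estimate settles at the compromise between data fit and constraint (about 1.33 here), illustrating how the penalty pulls parameters back toward the admissible set.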
       
  • Adaptive predator–prey optimization for tuning of infinite horizon LQR
           applied to vehicle suspension system
    • Abstract: Publication date: Available online 9 July 2018. Source: Applied Soft Computing. Author(s): Rashmi Ranjan Das, Vinodh Kumar Elumalai, Raaja Ganapathy Subramanian, Kadiyam Venkata Ashok Kumar. This paper puts forward an adaptive predator–prey optimization algorithm to solve the weight selection problem of linear quadratic control applied to vibration control of a vehicle suspension system. The proposed technique addresses two key issues of PSO, namely (a) premature convergence of the particles, and (b) the imbalance between exploration and exploitation in finding the global optimum. The main principle behind this optimization algorithm is that the inertia weight is adaptively updated based on the success rate of the particles to accelerate convergence, and the predator–prey strategy is reinforced to avoid the particles getting trapped in a local minimum, thereby guaranteeing convergence of the particles towards the global optimal solution. The convergence of the particles towards the global minimum is guaranteed on the basis of a passivity argument. Moreover, the strength of this new adaptive optimization technique in tuning the gains of a linear quadratic regulator is validated experimentally on a laboratory-scale active vehicle suspension system for improved ride comfort and passenger safety.
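The success-rate-driven inertia update can be sketched on top of a plain PSO loop. The predator–prey layer and the passivity proof are the paper's own contributions and are not reproduced; the adaptation rule below (inertia grows with the fraction of particles that improved their personal best) is a hypothetical minimal version of the idea:

```python
import random

def pso_minimize(f, dim, swarm=20, iters=150, seed=1):
    """PSO sketch with an adaptive inertia weight: the more particles that
    improved their personal best last iteration (the success rate), the
    larger the inertia, encouraging exploration; failures shrink it."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w = 0.9                                   # start exploratory
    for _ in range(iters):
        successes = 0
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fx = f(pos[i])
            if fx < pbest_f[i]:
                successes += 1
                pbest[i], pbest_f[i] = pos[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = pos[i][:], fx
        w = 0.4 + 0.5 * (successes / swarm)   # adapt inertia from success rate
    return gbest, gbest_f

best, best_f = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```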
       
  • Hybrid meta-heuristic algorithms for independent job scheduling in grid
           computing
    • Abstract: Publication date: Available online 26 May 2018. Source: Applied Soft Computing. Author(s): Muhanad Tahrir Younis, Shengxiang Yang. The term 'grid computing' describes an infrastructure that connects geographically distributed computers and heterogeneous platforms owned by multiple organizations, allowing their computational power, storage capabilities and other resources to be selected and shared. The job scheduling problem is recognized as one of the most important and challenging issues in grid computing environments. This paper proposes two strongly coupled hybrid meta-heuristic schedulers. The first combines Ant Colony Optimization and Variable Neighbourhood Search, in which the former acts as the primary algorithm and, during its execution, calls the latter as a supporting algorithm; the second merges the Genetic Algorithm with Variable Neighbourhood Search in the same fashion. Several experiments were carried out to analyze the performance of the proposed schedulers in terms of minimizing the makespan on well-known benchmarks. The experiments show that the proposed schedulers achieved impressive results compared to other selected approaches from the literature.
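The supporting-algorithm role of Variable Neighbourhood Search can be illustrated on the makespan objective itself: shake the incumbent in a size-k neighbourhood, then descend with a local search. This is a generic VNS sketch for independent jobs on identical machines, not the paper's hybrid (which couples VNS with ACO or a GA):

```python
import random

def vns_makespan(jobs, machines, k_max=3, iters=300, seed=2):
    """VNS sketch for independent job scheduling: assign each job to a
    machine so the makespan (max machine load) is minimized. Shaking
    reassigns k random jobs; local search relocates jobs off the most
    loaded machine while that improves the makespan."""
    rng = random.Random(seed)

    def makespan(assign):
        load = [0] * machines
        for j, m in zip(jobs, assign):
            load[m] += j
        return max(load)

    def local_search(assign):
        improved = True
        while improved:
            improved = False
            loads = [0] * machines
            for j, m in zip(jobs, assign):
                loads[m] += j
            worst = max(range(machines), key=lambda m: loads[m])
            for i, m in enumerate(assign):
                if m != worst:
                    continue
                for target in range(machines):
                    trial = assign[:]
                    trial[i] = target
                    if makespan(trial) < makespan(assign):
                        assign, improved = trial, True
                        break
                if improved:
                    break
        return assign

    best = local_search([rng.randrange(machines) for _ in jobs])
    for _ in range(iters):
        k = rng.randint(1, k_max)
        trial = best[:]
        for i in rng.sample(range(len(jobs)), k):   # shaking in neighbourhood k
            trial[i] = rng.randrange(machines)
        trial = local_search(trial)
        if makespan(trial) < makespan(best):
            best = trial
    return best, makespan(best)

# 7 jobs on 3 machines; the optimal makespan for this instance is 9
assign, ms = vns_makespan([7, 5, 3, 3, 2, 2, 2], machines=3)
```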
       
  • Fuzzy GRASP with path relinking for the Risk-constrained Cash-in-Transit
           Vehicle Routing Problem
    • Abstract: Publication date: Available online 21 May 2018. Source: Applied Soft Computing. Author(s): Nina Radojičić, Aleksandar Djenić, Miroslav Marić. This paper considers a special case of the famous vehicle routing problem with additional risk constraints, called the Risk-constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP). We propose a fuzzy GRASP (Greedy Randomized Adaptive Search Procedure) hybridized with a path relinking (PR) methodology for solving the RCTVRP. The introduced PR structure, which can also be used for other vehicle routing problems, is implemented. To reduce the algorithm's time complexity, a new data structure for the RCTVRP is incorporated. The proposed fuzzy GRASP with PR hybrid shows better computational performance than its non-fuzzy version. Furthermore, computational results on publicly available data sets indicate that the proposed algorithm outperforms all existing methods from the literature for solving the RCTVRP.
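The GRASP skeleton the paper fuzzifies is standard: a greedy randomized construction that picks from a restricted candidate list (RCL) controlled by a parameter alpha, followed by local search, over several restarts. The toy single-route example below is a plain (non-fuzzy, no path relinking, no risk constraints) sketch of that skeleton:

```python
import math
import random

def grasp_route(points, alpha=0.3, restarts=30, seed=3):
    """GRASP sketch for a single open route from points[0]: greedy
    randomized construction (RCL controlled by alpha) plus 2-opt local
    search; the best route over all restarts is kept."""
    rng = random.Random(seed)

    def dist(a, b):
        return math.dist(points[a], points[b])

    def length(route):
        return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

    def construct():
        route, rest = [0], set(range(1, len(points)))
        while rest:
            d = {c: dist(route[-1], c) for c in rest}
            lo, hi = min(d.values()), max(d.values())
            # restricted candidate list: near-greedy choices only
            rcl = [c for c in rest if d[c] <= lo + alpha * (hi - lo)]
            nxt = rng.choice(rcl)
            route.append(nxt)
            rest.remove(nxt)
        return route

    def two_opt(route):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(route) - 1):
                for j in range(i + 1, len(route) + 1):
                    trial = route[:i] + route[i:j][::-1] + route[j:]
                    if length(trial) < length(route):
                        route, improved = trial, True
        return route

    best = None
    for _ in range(restarts):
        r = two_opt(construct())
        if best is None or length(r) < length(best):
            best = r
    return best, length(best)

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]
route, cost = grasp_route(pts)   # optimal open route here has length 4.0
```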
       
  • Physics of failure-based reliability prediction of turbine blades using
           multi-source information fusion
    • Abstract: Publication date: Available online 19 May 2018. Source: Applied Soft Computing. Author(s): He Li, Hong-Zhong Huang, Yan-Feng Li, Jie Zhou, Jinhua Mi. Fatigue and fracture of turbine blades are fatal to aero engines, so reliability prediction of aero engines is indispensable to guarantee their safety. For turbine blades, most recent research works focus only on the number of cycles and extract information from a single source. To remove these limitations, a physics-of-failure-based reliability prediction method using multi-source information fusion is developed in this paper to predict the reliability of aero engine turbine blades. In the proposed method, fuzzy theory is employed to represent the uncertainties involved in prediction. Case studies of reliability prediction under fuzzy stress, with and without fuzzy strength, are conducted using a dynamic stress-strength interference model that takes the types of cycles of aero engines into consideration. Results indicate that the proposed method is better in line with engineering practice, is more flexible in decision making, and can predict the reliability of aero engine turbine blades as an interval by utilizing the proposed linear fusion algorithm. In addition, the predicted interval contains the results predicted by other commonly used information fusion methods. Hence, the proposed method helps remove the confusion caused by having to select among multiple methods.
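The stress-strength interference principle underlying the model is simply reliability = P(strength > stress). A Monte Carlo sketch of that principle (crisp normal distributions with made-up parameters; the paper's dynamic, fuzzy, multi-source version is far richer):

```python
import random

def stress_strength_reliability(stress, strength, n=100_000, seed=4):
    """Monte Carlo estimate of P(strength > stress): sample both random
    variables n times and count how often strength exceeds stress."""
    rng = random.Random(seed)
    hits = sum(strength(rng) > stress(rng) for _ in range(n))
    return hits / n

# Illustrative normal stress (mean 300, sd 30) and strength (mean 400, sd 40):
# the exact answer is Phi(100 / sqrt(30^2 + 40^2)) = Phi(2) ~ 0.977
R = stress_strength_reliability(
    lambda rng: rng.gauss(300, 30),
    lambda rng: rng.gauss(400, 40),
)
```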
       
  • Interval quantile regression models based on swarm intelligence
    • Abstract: Publication date: Available online 5 May 2018. Source: Applied Soft Computing. Author(s): Yanne M.G. Soares, Roberta A.A. Fagundes. This paper presents quantile regression models for classical data and interval symbolic data using algorithms based on swarm intelligence to estimate the parameters, aiming to improve model performance. These methods are also compared with estimation methods based on linear programming that are widely used in the literature. Applications using real and simulated data are considered. Prediction quality is assessed by the mean magnitude of relative error calculated on test data. The growth of symbolic data alerts to the need to develop new statistical methodologies for the treatment of this type of information. The results show that the proposed models are effective alternatives for optimizing the choice of parameters of the quantile regression, providing greater precision and robustness than models based on linear programming.
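Quantile regression estimates parameters by minimizing the pinball (check) loss, whose minimizer is the tau-th conditional quantile; that loss is what a metaheuristic optimizes in place of linear programming. Below, the loss plus a simple random-search optimizer standing in for the paper's swarm algorithms (everything is a generic sketch):

```python
import random

def pinball_loss(tau, y_true, y_pred):
    """Quantile (pinball) loss: asymmetric absolute error whose minimizer
    is the tau-th conditional quantile."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        e = y - p
        total += tau * e if e >= 0 else (tau - 1) * e
    return total / len(y_true)

def fit_quantile_line(x, y, tau, iters=2000, seed=5):
    """Stand-in optimizer (accept-if-better random search) for the
    swarm-based estimation of a linear quantile model y ~ a + b*x."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    best_loss = pinball_loss(tau, y, [best[0] + best[1] * v for v in x])
    for _ in range(iters):
        a = best[0] + rng.gauss(0, 0.5)
        b = best[1] + rng.gauss(0, 0.5)
        loss = pinball_loss(tau, y, [a + b * v for v in x])
        if loss < best_loss:
            best, best_loss = (a, b), loss
    return best

xs = list(range(10))
ys = [2.0 * v + 1.0 for v in xs]     # noise-free line, for illustration
a, b = fit_quantile_line(xs, ys, tau=0.5)
```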
       
  • Construction of prediction intervals using adaptive neurofuzzy inference
           systems
    • Abstract: Publication date: Available online 4 May 2018. Source: Applied Soft Computing. Author(s): Bara Miskony, Dianhui Wang. Point forecasting suffers from poor interpretability when uncertainties are associated with the data or the system is unstable. Prediction intervals (PIs) can cope with these deficiencies and can quantify the level of uncertainty associated with point predictions. In this paper, the well-known adaptive neuro-fuzzy inference system (ANFIS) is employed as the learner model to construct PIs with a randomized algorithm. Two ANFIS models are independently built to produce the lower and upper bounds of the PIs, respectively. The results obtained, with comparisons over six datasets, demonstrate that our proposed algorithm performs well in terms of both coverage rate and specificity. The proposed algorithm is also applied to a real-world application in energy science, and the experimental results show its applicability for constructing PIs with satisfactory performance.
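The two qualities mentioned, coverage rate and specificity, are conventionally measured by the PI coverage probability and the mean interval width. A minimal sketch of those two metrics (generic definitions, not the paper's exact formulas):

```python
def pi_metrics(y, lower, upper):
    """Prediction-interval quality sketch: coverage probability (fraction
    of targets inside their interval) and mean interval width (a proxy
    for specificity; narrower is better at equal coverage)."""
    inside = sum(lo <= t <= up for t, lo, up in zip(y, lower, upper))
    picp = inside / len(y)
    mean_width = sum(up - lo for lo, up in zip(lower, upper)) / len(y)
    return picp, mean_width

# Three of the four targets fall inside their intervals
picp, width = pi_metrics(
    y=[1.0, 2.0, 3.0, 10.0],
    lower=[0.5, 1.5, 2.5, 3.5],
    upper=[1.5, 2.5, 3.5, 4.5],
)
```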
       
  • Error convergence analysis of the SUFIN and CSUFIN
    • Abstract: Publication date: Available online 7 April 2018. Source: Applied Soft Computing. Author(s): José de Jesús Rubio. The sequential update fuzzy inference network (SUFIN) and the changed sequential update fuzzy inference network (CSUFIN) are two evolving intelligent algorithms utilized for modelling plants in prognostic health management. In this research, the error convergence of SUFIN and CSUFIN is analyzed. SUFIN utilizes the extended Kalman filter, while CSUFIN uses the gradient descent technique. First, the algorithms are linearized to obtain their modelling dynamic equations. Second, a Lyapunov strategy is utilized to ensure the error convergence of the studied networks. Two examples show the performance of the analyzed algorithms.
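The flavour of the Lyapunov argument can be seen on a toy case: for a gradient-descent update of a single parameter against a quadratic modelling error, the squared error acts as a Lyapunov candidate and decreases monotonically. This is only a one-parameter illustration of the proof style, not the SUFIN/CSUFIN analysis itself:

```python
def training_errors(w0=5.0, lr=0.1, steps=20):
    """Gradient descent on e^2 with e = w - target: record V = e^2 each
    step. With 0 < lr < 1, V shrinks geometrically (Lyapunov decrease)."""
    w, target = w0, 1.0
    errors = []
    for _ in range(steps):
        e = w - target
        errors.append(e * e)          # V = e^2, the Lyapunov candidate
        w -= lr * 2 * e               # gradient of e^2 w.r.t. w
    return errors

V = training_errors()
```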
       
  • A unique feature extraction using MRDWT for automatic classification of
           abnormal heartbeat from ECG big data with Multilayered Probabilistic
           Neural Network classifier
    • Abstract: Publication date: Available online 5 April 2018. Source: Applied Soft Computing. Author(s): Hari Mohan Rai, Kalyan Chatterjee. This paper employs a novel adaptive feature extraction technique for electrocardiogram (ECG) signals to detect cardiac arrhythmias using the multiresolution discrete wavelet transform on ECG big data. Five types of ECG arrhythmias, including normal beats, have been classified. The MIT-BIH database of 48 patient records is utilized for detection and analysis of cardiac arrhythmias. The proposed feature extraction uses Daubechies wavelets and extracts 21 feature points that include the QRS complex of the ECG signal. The Multilayered Probabilistic Neural Network (MPNN) classifier is proposed as the best-suited classifier for the proposed feature. A total of 1700 ECG beats were tested using the MPNN classifier and compared with three other classifiers: Back Propagation (BPNN), Multilayered Perceptron (MLP) and Support Vector Machine (SVM). The system efficiency and performance have been evaluated using several criteria: precision (PR), F-score, positive predictivity (PP), sensitivity (SE), classification error rate (CER) and specificity (SP). The overall system accuracy obtained using the MPNN technique with the proposed feature is 99.53%, whereas BPNN, MLP and SVM provide 97.94%, 98.53%, and 99%, respectively. The processing time using the MPNN classifier is only 3 s, which shows that the proposed technique is not only accurate and efficient but also fast.
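Multiresolution wavelet decomposition repeatedly halves a signal into approximation and detail coefficients; the detail coefficients at each level are the raw material for features like the 21 points described. A minimal sketch using the Haar wavelet for brevity (the paper uses Daubechies wavelets):

```python
import math

def haar_dwt(signal):
    """One DWT level with the Haar wavelet: approximation (low-pass) and
    detail (high-pass) coefficients at half the sampling rate."""
    h = math.sqrt(2) / 2
    approx = [h * (signal[i] + signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    detail = [h * (signal[i] - signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def multiresolution(signal, levels):
    """Multiresolution decomposition: repeatedly split the approximation,
    collecting the detail coefficients of each level."""
    details = []
    for _ in range(levels):
        signal, d = haar_dwt(signal)
        details.append(d)
    return signal, details

# A smooth staircase signal: level-1 details vanish, level-2 details capture
# the step structure.
approx, details = multiresolution([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0], 2)
```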
       
  • An integrated approach for failure mode and effects analysis based on
           fuzzy best-worst, relative entropy, and VIKOR methods
    • Abstract: Publication date: Available online 26 March 2018. Source: Applied Soft Computing. Author(s): Zhang-peng Tian, Jian-qiang Wang, Hong-yu Zhang. In general, analysis of failure modes and their effects requires a group of experts to tackle the substantial uncertainties associated with the risk evaluation process. To date, an increasing number of failure mode and effects analysis (FMEA) models based on multi-criteria decision-making (MCDM) methods have been developed to overcome one or more of these uncertainty-related issues. However, most of these improvements have not carefully considered the process of assigning importance weights to risk factors and FMEA team members. This study aims to enhance the performance of classic FMEA and proposes an integrated fuzzy MCDM approach. First, a fuzzy best-worst method is used to obtain the weights of risk factors. Second, an integrated structure based on fuzzy proximity and fuzzy similarity entropy weights is developed to obtain the weights of FMEA team members with respect to different risk factors. Finally, a fuzzy VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) approach is employed to obtain the risk priorities of failure modes. The applicability and effectiveness of the proposed approach are validated through an illustrative example concerning risk analysis of a grinding wheel system. The results of sensitivity and comparative analyses show that the proposed approach is valid and can provide valuable and effective information for risk management decision-making.
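The crisp VIKOR core behind the fuzzy version computes, for each alternative, a group utility S, an individual regret R, and a compromise index Q. A minimal sketch with crisp ratings (the paper works with fuzzy ratings and fuzzy weights); the "higher rating = riskier" convention below is an assumption for illustration:

```python
def vikor(scores, weights, v=0.5):
    """VIKOR sketch: each row of `scores` rates one failure mode on the
    risk factors (higher = riskier); `weights` are the factor weights.
    Returns Q per mode; larger Q lies farther from the low-risk ideal,
    so it gets higher risk priority under this convention."""
    m, n = len(scores), len(scores[0])
    best = [min(row[j] for row in scores) for j in range(n)]    # ideal
    worst = [max(row[j] for row in scores) for j in range(n)]   # anti-ideal
    S, R = [], []
    for row in scores:
        terms = [weights[j] * (row[j] - best[j]) / (worst[j] - best[j])
                 for j in range(n)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1 - v) * (R[i] - r_min) / (r_max - r_min)
            for i in range(m)]

# Three failure modes rated on three risk factors
Q = vikor(scores=[[9, 8, 7], [3, 2, 4], [6, 5, 5]],
          weights=[0.5, 0.3, 0.2])
```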
       
  • Conditional predictive Bayesian Cramér-Rao Lower Bounds for
           prognostic algorithms design
    • Abstract: Publication date: Available online 10 February 2018. Source: Applied Soft Computing. Author(s): David E. Acuña, Marcos E. Orchard, Raimundo J. Saona. System states are related, directly or indirectly, to health condition indicators. Indeed, critical system failures can be efficiently characterized through a state-space manifold. This fact has encouraged the development of a series of failure prognostic frameworks based on Bayesian processors (e.g. particle or unscented Kalman filters), which help to estimate the Time-of-Failure (ToF) probability distribution in nonlinear, non-Gaussian systems with uncertain future operating profiles. However, it is still unclear how to determine the efficacy of these methods, since the Prognostics and Health Management (PHM) community has not developed rigorous theoretical frameworks that could help to define proper performance indicators. In this regard, this article introduces a novel prognostic performance metric based on the concept of Bayesian Cramér-Rao Lower Bounds (BCRLBs) for the predicted state mean square error (MSE), conditional on measurement data and model dynamics, providing a formal mathematical definition of the prognostic problem. Furthermore, we propose a novel step-by-step design methodology to tune prognostic algorithm hyper-parameters, which guarantees that the obtained results do not violate fundamental precision bounds. As an illustrative example, both the predictive BCRLB concept and the proposed design methodology are applied to the problem of End-of-Discharge (EoD) time prognostics in lithium-ion batteries.
       
  • Adaptive neuro-fuzzy inference system for deburring stage classification
           and prediction for indirect quality monitoring
    • Abstract: Publication date: Available online 31 January 2018. Source: Applied Soft Computing. Author(s): Wahyu Caesarendra, Tomi Wijaya, Tegoeh Tjahjowidodo, Bobby K. Pappachan, Arthur Wee, Muhammad Izzat Roslan. Manufacturing of aerospace components consists of a combination of different types of machining, finishing, and measuring processes. One of the finishing processes is deburring, i.e. a finishing process to remove burrs from work coupons after a hole boring process. Deburring is conducted to achieve the required surface finish quality prior to further processes in the assembly line. This paper introduces sensor data analysis as a tool to quantify and correlate the deburring stage with features extracted from sensor data. The study covers signal processing, feature extraction and an analytical method to determine their relevance to the surface finish quality of the deburring process. Wavelet decomposition and Welch's spectrum estimate are used as the signal processing and feature extraction methods. The features are then used as the basis for analysis by an adaptive neuro-fuzzy inference system (ANFIS). The ANFIS yields outputs corresponding to the predicted surface finish quality in terms of boss hole chamfer length and the stage classification of the deburring process. The results show a decreasing trend in the measured vibration signal, which correlates qualitatively well with the deburring stage and the development of chamfer length during the deburring process.
       
  • Deep diagnostics and prognostics: An integrated hierarchical learning
           framework in PHM applications
    • Abstract: Publication date: Available online 31 January 2018. Source: Applied Soft Computing. Author(s): Yanhui Lin, Xudong Li, Yang Hu. Prognostics and Health Management (PHM) is an integrated technique for improving the availability and efficiency of high-value industrial equipment and reducing maintenance cost. One of the most challenging problems in PHM is how to effectively process raw monitoring signals into information-rich features that are readable enough for PHM modeling. In this paper, we propose an integrated hierarchical learning framework capable of performing unsupervised feature learning, diagnostics and prognostics modeling together. The proposed method is based on Auto-Encoders (trained with an L1-norm penalty) and Extreme Learning Machines (trained with an L2-norm penalty). The method is applied to two different case studies concerning the diagnostics of motor bearings and the prognostics of turbofan engines, and its performance is compared with other commonly applied PHM approaches and machine learning tools. The obtained results demonstrate the superiority of the proposed method, especially its ability to extract relevant features from non-informative and noisy signals while maintaining efficiency.
       
  • Adaptive large neighborhood search applied to the design of electronic
           circuits
    • Abstract: Publication date: Available online 22 August 2018. Source: Applied Soft Computing. Author(s): Vinicius Gandra Martins Santos, Marco Antonio Moreira de Carvalho. In this paper, a new algorithm is proposed for solving the Gate Matrix Layout Problem (GMLP). This combinatorial problem is NP-hard and aims to determine the physical layout of the components of an electronic circuit in order to minimize the number of tracks required to connect its nets. By reducing the circuit area, it is possible to reduce manufacturing costs and also improve circuit performance. Computer-aided design of such layouts has direct practical applications in engineering and industry, including information technology, industrial process automation, and consumer goods production. We propose new local search procedures combined in an Adaptive Large Neighborhood Search (ALNS) metaheuristic to generate solutions for the GMLP. To assess the quality of the proposed method, we considered 1,455 real-world and artificial instances from the literature and compared the proposed ALNS with the state-of-the-art method for the GMLP. The ALNS performance is robust: it matches 89% of known optimal solutions and also improves the best known results on some instances.
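The adaptive layer that distinguishes ALNS from plain large neighborhood search is simple to state: destroy/repair operators are drawn by roulette wheel on weights, and the weights are updated from the reward each operator earns (large for a new best, smaller for an improvement, none otherwise). A generic sketch of that layer, with hypothetical operator names unrelated to the paper's GMLP procedures:

```python
import random

class OperatorSelector:
    """ALNS adaptive layer sketch: roulette-wheel operator selection with
    reward-driven weight updates via exponential smoothing."""

    def __init__(self, names, decay=0.8, seed=6):
        self.rng = random.Random(seed)
        self.weights = {n: 1.0 for n in names}
        self.decay = decay

    def pick(self):
        total = sum(self.weights.values())
        r = self.rng.uniform(0, total)
        acc = 0.0
        for name, w in self.weights.items():
            acc += w
            if r <= acc:
                return name
        return name  # numeric edge case: fall back to the last operator

    def reward(self, name, score):
        # blend past weight with the fresh score
        self.weights[name] = (self.decay * self.weights[name]
                              + (1 - self.decay) * score)

sel = OperatorSelector(["random_removal", "worst_removal", "greedy_repair"])
sel.reward("worst_removal", 10.0)   # e.g. it just produced a new best
chosen = [sel.pick() for _ in range(1000)]
```

After the reward, "worst_removal" carries more weight and is drawn correspondingly more often.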
       
  • Smarten Up Computational Intelligence to Decipher Time Series Data
    • Abstract: Publication date: Available online 17 August 2018. Source: Applied Soft Computing. Author(s): Shinq-Jen Wu, Cheng-Tao Wu. There are dramatic increases in the threats from out-of-control diseases such as cancer. The principal solution is to obtain a good prediction of the dynamic behavior of the underlying systems in order to control them, and an emergent challenge is to develop more effective and efficient reverse engineering technologies. In this study, we propose a smartened-up differential evolution (sDE) and a heuristically-deviated local search (hLS) to address this issue. Premature convergence and insufficient exploitation for complex systems limit the potential of differential evolution to decipher time series data. Since the spirit of DE is to introduce individual differences as a directed search deviation, we reinforce the evolutionary variation between the winner and the other members, and also among those members. The idea is implemented with succeeded exploiting searching (a united locally variant search rule for the best individual to achieve efficient exploitation rapidly), differential mutation (a more flexible mutation strategy to strengthen the differential evolution), and a flexible two-way migration. Additionally, insufficient global searching over a large range is a critical issue for various gradient-based methods. We therefore propose a heuristically-deviated scheme that allows the search to be successively widened (from a tangent to a region, to a large range, and further to a pop-jumping deviation). Three diverting operations (population-toward, random-toward and popping-diverse differentiation) ensure that hLS achieves a valid escape in a limited amount of time.
Simulation tests on S-systems show that almost perfect results are obtained even when learning starts at a random poor point in a wide search space (> 99.96% accuracy for a kinetic order range of [-100, 100] with 80 neighborhood starting points). A perfect prediction of Michaelis-Menten systems shows the potential of hLS for global-search robustness. We additionally discuss long-period dense-sample, short-period sparse-sample and general-range cases for learning-range robustness (> 99.97% average accuracy for 21 sample points), and propose a criterion for setting up a new experiment. These results demonstrate that both sDE and hLS are able to maintain a diverse search and stay flexible in jumping away from an attractor.
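The "individual differences as a directed search deviation" idea the paper builds on is classic differential evolution: each trial vector adds the scaled difference of two population members to a third, then binomially crosses it with its parent. A sketch of the baseline DE/rand/1/bin scheme (not the authors' sDE variants):

```python
import random

def differential_evolution(f, dim, pop_size=30, gens=200, F=0.7, CR=0.9, seed=7):
    """Classic DE/rand/1/bin: mutate with a scaled member difference,
    binomial crossover with the parent, then greedy one-to-one selection."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)           # force at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

x, fx = differential_evolution(lambda v: sum(t * t for t in v), dim=3)
```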
       
  • Meta Fuzzy Functions: Application of Recurrent Type-1 Fuzzy Functions
    • Abstract: Publication date: Available online 15 August 2018. Source: Applied Soft Computing. Author(s): Nihat Tak. The main objective of meta-analysis is to aggregate the results of multiple scientific studies on a specific topic. In the proposed method, instead of aggregating the results of different studies, different methods are aggregated with the help of the fuzzy c-means clustering algorithm. Meta fuzzy functions are introduced in the paper; their idea is to aggregate methods proposed for the same purpose, such as forecasting or prediction. The study aggregates models of the same method under different parameter specifications rather than aggregating different methods. Recently, recurrent type-1 fuzzy functions were introduced as an alternative forecasting method; their main advantage is that they are free of assumptions and rules. Three parameters must be adjusted for recurrent type-1 fuzzy functions: the number of lags for AR(p), the number of lags for MA(q), and the number of clusters. Models of recurrent type-1 fuzzy functions with different parameter specifications are aggregated in the paper. The results show that it is possible to increase the forecasting performance of recurrent type-1 fuzzy functions in terms of both RMSE and MAPE.
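The fuzzy c-means machinery used for the aggregation assigns each item a graded membership in every cluster. A sketch of the standard FCM membership formula, u_i = 1 / sum_j (d_i/d_j)^(2/(m-1)), in one dimension (a generic illustration of the clustering step, not the paper's aggregation scheme):

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership sketch: the degree to which each point
    belongs to each cluster center; every row sums to 1, and a point
    sitting exactly on a center belongs fully to it."""
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            row = [1.0 if di == 0.0 else 0.0 for di in d]
        else:
            row = [1.0 / sum((d[i] / dj) ** (2 / (m - 1)) for dj in d)
                   for i in range(len(d))]
        u.append(row)
    return u

u = fcm_memberships([0.0, 1.0, 10.0], centers=[0.0, 10.0])
```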
       
  • An experimental evaluation of weightless neural networks for multi-class
           classification
    • Abstract: Publication date: Available online 13 August 2018. Source: Applied Soft Computing. Author(s): Massimo De Gregorio, Maurizio Giordano. WiSARD belongs to the class of weightless neural networks; it is based on a neural model that uses lookup tables to store the function computed by each neuron, rather than storing it in the weights of neuron connections. WiSARD is characterised by a simple implementation and a fast learning phase due to its one-way RAM access/lookup mechanism, and was originally conceived as a pattern recognition device focused mainly on image processing. In this work we present a multi-class classification method in the machine learning domain based on WiSARD, called the WiSARD Classifier. The method uses a binary encoding scheme to transform multivariable data in the domain of real numbers into binary patterns, which are the input to WiSARD. The main contribution of this work is an extensive experimental evaluation of WiSARD's classification capability in comparison with state-of-the-art methods. For this purpose we conducted experiments applying nine well-known machine learning methods (including the WiSARD Classifier) to seventy classification problems. Cross-validation accuracies were collected and compared by means of a statistical analysis based on nonparametric tests (Friedman, Friedman Aligned Rank, and Quade tests), showing that the WiSARD Classifier is very close in performance to the best methods available in the most popular machine learning libraries.
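The RAM-lookup model is compact enough to sketch fully: the binary input is split into fixed random tuples, each tuple addresses one RAM, training stores the addresses it sees, and the discriminator's response is the count of RAMs that recognise their address. A minimal sketch (sets stand in for the RAM nodes; not the authors' full classifier with its real-valued encoding):

```python
import random

class Discriminator:
    """WiSARD discriminator sketch: one RAM (a set of seen addresses) per
    random tuple of input positions; training stores addresses, and the
    response counts RAMs recognising their address."""

    def __init__(self, input_len, bits, seed=8):
        rng = random.Random(seed)
        idx = list(range(input_len))
        rng.shuffle(idx)
        self.tuples = [idx[i:i + bits] for i in range(0, input_len, bits)]
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, pattern):
        for tup in self.tuples:
            yield tuple(pattern[i] for i in tup)

    def train(self, pattern):
        for ram, addr in zip(self.rams, self._addresses(pattern)):
            ram.add(addr)

    def response(self, pattern):
        return sum(addr in ram
                   for ram, addr in zip(self.rams, self._addresses(pattern)))

# One discriminator per class; the highest response wins.
zeros, ones = Discriminator(16, 4), Discriminator(16, 4)
zeros.train([0] * 16)
ones.train([1] * 16)
noisy_one = [1] * 14 + [0, 0]          # an all-ones pattern with 2 flipped bits
scores = (zeros.response(noisy_one), ones.response(noisy_one))
```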
       
  • Computing with words for health monitoring of people suffering with heart
           diseases
    • Abstract: Publication date: Available online 11 August 2018. Source: Applied Soft Computing. Author(s): Prashant K. Gupta, Pranab K. Muhuri. Developments in medical science have provided new ways, such as wearable sensors, to care for people at risk of heart failure at reduced medical expense. This is more efficient than traditional health monitoring methods such as in-person visits to medical practitioners and clinics. Unfortunately, wearable sensors can measure quantitative parameters such as blood pressure and heart rate, but not qualitative ones like ease of respiration or pain. The values of qualitative parameters are generally expressed by a sick person in the form of 'words'. In real life, medical experts use their experience to suggest plausible medical tests or treatment to a patient based on his or her feedback in terms of 'words'. In this paper we therefore propose a new approach, called heart monitoring through perceptual computing (HMT Per-C), that assesses the medical condition of a person at risk of heart failure by processing user feedback in terms of 'words' and generates recommendations about the medical attention he or she needs. HMT Per-C is based on perceptual computing, a computing with words (CWW) technique that models 'words' using interval type-2 fuzzy sets. We have also compared the recommendations generated by perceptual computing with those generated by other CWW approaches, viz., the extension principle, the symbolic method and the 2-tuple method, and found that they failed to give accurate results in 8%, 44% and 28% of cases, respectively. Therefore, we believe that our proposed approach, HMT Per-C, is better, more user-friendly and closer to the real-life scenario.
An outcome of the present work is a ready-to-use mobile app, "HMT Per-C", that complements data obtained from devices like oximeters but does not replace them. It can be downloaded freely from http://sau.ac.in/~cilab/.
       
  • Optimal multivariable conditions in the operation of an absorption heat
           transformer with energy recycling solved by the genetic algorithm in
           artificial neural network inverse
    • Abstract: Publication date: Available online 9 August 2018. Source: Applied Soft Computing. Author(s): R.A. Conde-Gutiérrez, U. Cruz-Jacobo, A. Huicochea, S.R. Casolco, J.A. Hernández. This research applies an artificial neural network inverse (ANNi) and a genetic algorithm (GA) to propose and solve a multivariable function that optimizes an absorption heat transformer (AHT) with energy recycling. The purpose is to provide a method capable of maximizing the coefficient of performance (COP) of the AHT by finding multiple optimal input variables, with the benefit of minimizing the energy supplied as heat to the equipment. AHTs use waste heat sources to obtain useful energy, and recycling part of the useful energy within the same system increases performance. The research therefore focuses on simultaneously optimizing the heat source in the generator, the evaporator and additionally the condenser, since the COP is determined as the ratio of the useful heat load to the heat loads supplied. Consequently, this study models the process and then performs the optimization using the following methods: artificial neural networks (ANN), ANNi and GA. An ANN model was developed to predict the COP from experimental data of the equipment, and a satisfactory agreement was obtained between simulated and experimental data. With the ANN model consolidated, an ANNi was applied in which the variables to be optimized were the temperatures in the generator, the evaporator and the condenser. A GA was chosen to solve the multivariable function. The results showed that the ANNi-GA methodology makes multivariable optimization possible, whereas this methodology had previously been used to optimize only one variable at a time.
The temperature in the generator turned out to be the key variable for increasing the performance of the AHT, followed by the evaporator and condenser, maximizing the COP of a specific test from 0.26 to 0.43 and obtaining an energy saving of up to 3 °C with a maximum computation time of 5.38 s. The research thus provides a feasible method to control multiple input variables of an AHT from a desired COP value. By properly supplying the waste heat to the AHT, it is possible to minimize energy consumption and experimentation time, since the proposed method determines the best scenario for good results instead of performing different experimental tests. Finally, the GA satisfactorily solved the proposed ANNi multivariable function, making this tool feasible for optimizing several variables at the same time.
       
  • Fuzzy threshold for the initiation of sediment motion
    • Abstract: Publication date: Available online 9 August 2018Source: Applied Soft ComputingAuthor(s): Mike Spiliotis, Vasileios Kitsikoudis, V.S. Ozgur Kirca, Vlassios HrissanthouAbstractThe present paper considers the threshold for the initiation of sediment motion to be a fuzzy set by taking into account the uncertainty related to individual sediment positioning and turbulence intensity. Incipience of sediment motion across a stream-bed occurs gradually, and a fuzzy set facilitates the notation of partial sediment transport in the computations. For the derivation of this fuzzy threshold, the formula developed in Zanke, U.C.E. (2003), "On the influence of turbulence on the initiation of sediment motion", Int. J. Sediment Res., 18(1), 17-31, for the computation of dimensionless critical shear stress is extended accordingly by using, instead of crisp values in order to describe the angle of grain contact and the turbulence intensity, fuzzy numbers. This can be achieved by exploiting the extension principle of fuzzy sets and logic. Hence, the proposed formula generates two three-dimensional surfaces by means of the extension principle of fuzzy sets, which define the lower and upper limits of the dimensionless critical shear stress membership function with respect to the shear Reynolds number and the relative roughness. The benefit of this approach, when compared to an approach that solely utilizes characteristic or average values, is that it can predict partial sediment transport of the most susceptible to movement particles, which is very common in gravel-bed streams even for bankfull flow conditions. In addition, a measure to compare the produced fuzzy dimensionless critical shear stress with the exerted dimensionless shear stress, is proposed, which is based on the concept of fuzzy subtraction and takes into account the whole shape of the membership function. 
To justify the proposed methodology, the produced results are compared with experimental data, and useful conclusions are drawn. Based on the fuzzy extension of the physically-based equation of Zanke (2003), a fuzzy band is produced which includes almost all of the experimental data used, with a functional spread.
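As an illustration of the extension-principle mechanism behind this fuzzy band, the sketch below propagates a triangular fuzzy number through a monotone function via alpha-cuts. The function and the fuzzy friction angle are hypothetical stand-ins, not Zanke's formula or the paper's calibrated values:

```python
# Illustrative sketch (NOT Zanke's actual formula): propagate a triangular
# fuzzy number through a monotone increasing function via alpha-cuts, the
# mechanism that produces the lower/upper band for the critical shear stress.

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def extend(f, tri, levels=5):
    """Extension principle for a monotone increasing f: map each alpha-cut."""
    cuts = []
    for i in range(levels + 1):
        alpha = i / levels
        lo, hi = alpha_cut(tri, alpha)
        cuts.append((alpha, f(lo), f(hi)))  # image interval at this level
    return cuts

# Hypothetical fuzzy friction angle (degrees) and a stand-in monotone response.
band = extend(lambda x: 0.01 * x ** 1.5, (25.0, 30.0, 35.0))
```

At alpha = 1 the band collapses to the crisp value; at alpha = 0 it is widest, which is exactly the "functional spread" behaviour described above.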
       
  • Symbolic interpretation of artificial neural networks based on
           multiobjective genetic algorithms and association rules mining
     • Abstract: Publication date: Available online 9 August 2018. Source: Applied Soft Computing. Author(s): Dounia Yedjour, Abdelkader Benyettou. Abstract: Rule extraction from neural networks is an important task. In practice, decision makers often settle for less accurate but comprehensible models, typically decision trees, whose solutions are given in an easily interpretable graphical form. Black-box rule extraction techniques, operating directly on the input-output relationship, are clearly superior to the restricted open-box methods, normally tailored to a specific architecture. This is especially important since most data miners today will use some kind of ensemble (instead of a single model) to maximize accuracy. Consequently, the ability to extract rules from any opaque model is a key demand for rule extraction techniques. This paper proposes a new multiobjective genetic method to extract knowledge from a trained artificial neural network using the association rules technique. The main aim of this hybridization is to extract the optimal rules from the neural network for further classification. The algorithm consists of two stages: a rule filtering phase, which eliminates misleading rules by taking into account the support, confidence and lift measures, and a rule set optimization phase, which finds the set of optimal rule sets by considering fidelity, coverage and complexity measures. The algorithm is evaluated on five UCI datasets. The experimental results show that the proposal provides interesting rules: accuracy and comprehensibility are clearly improved, making this a promising and trustworthy direction in the area of neural network rule extraction.
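The three filtering measures named above (support, confidence, lift) are standard association-rule statistics; a minimal sketch of their computation for a rule X → Y over a transaction set (the data and rule here are illustrative, not from the paper):

```python
# Standard association-rule measures for a rule X -> Y over transactions,
# the quantities the paper's rule filtering phase thresholds on.

def measures(transactions, X, Y):
    n = len(transactions)
    sup_x = sum(X <= t for t in transactions) / n         # P(X)
    sup_y = sum(Y <= t for t in transactions) / n         # P(Y)
    sup_xy = sum((X | Y) <= t for t in transactions) / n  # P(X and Y)
    support = sup_xy
    confidence = sup_xy / sup_x if sup_x else 0.0         # P(Y | X)
    lift = confidence / sup_y if sup_y else 0.0           # > 1 means positive association
    return support, confidence, lift

# Illustrative transactions and rule {a} -> {b}.
data = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"}]
sup, conf, lft = measures(data, {"a"}, {"b"})
```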
       
  • Two bi-objective hybrid approaches for the frequent subgraph mining
           problem
     • Abstract: Publication date: Available online 9 August 2018. Source: Applied Soft Computing. Author(s): Sidali Hocine Farhi, Dalila Boughaci. Abstract: This paper proposes two hybrid bi-objective evolutionary algorithms to solve the frequent subgraph mining problem. In this contribution, we propose to improve the search ability of the stochastic local search (SLS) and variable neighborhood search (VNS) algorithms by adding genetic operators (crossover and mutation) and the Pareto dominance concept. A mined subgraph is evaluated by a bi-objective function which uses two parameters, support and size. We combine GA and SLS in a hybrid method denoted GASLS, and GA with VNS in a hybrid method denoted GAVNS, to solve the considered problem. The two proposed methods are implemented and evaluated on two synthetic and five real-world datasets of various sizes, and their performance was compared against a single-objective stochastic local search algorithm and the well-known NSGA-II algorithm. The proposed methods are able to efficiently discover diversified subgraphs in the search space by exploring new solutions. The numerical results show that, in general, the GASLS method provides competitive results and finds high-quality solutions compared to the other considered algorithms.
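The Pareto dominance concept that these hybrids rely on can be stated compactly. The sketch below assumes both objectives (support and size) are to be maximized — an assumption, since the abstract does not spell out the orientation of each objective:

```python
# Pareto dominance test and non-dominated filtering for bi-objective
# candidates (support, size); both objectives assumed maximized here.

def dominates(u, v):
    """True if u Pareto-dominates v: >= on every objective, > on at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(solutions):
    """Keep only solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Illustrative (support, size) pairs for candidate subgraphs.
cands = [(0.9, 3), (0.7, 5), (0.6, 4), (0.9, 5)]
front = pareto_front(cands)
```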
       
  • Bonferroni induced heavy operators in ERM decision-making: A case on large
           companies in Colombia
     • Abstract: Publication date: Available online 7 August 2018. Source: Applied Soft Computing. Author(s): Fabio Blanco-Mesa, Ernesto León-Castro, José M. Merigó. Abstract: Averaging aggregation operators analyze a set of data, providing a summary of the results. This study focuses on the Bonferroni mean and the induced and heavy aggregation operators. The aim of the work is to present new aggregation operators that combine these concepts, forming the Bonferroni induced heavy ordered weighted average and several particular formulations. This approach represents Bonferroni means with order-inducing variables and with weighting vectors whose components can sum to more than one. The paper also develops some extensions using distance measures, forming the Bonferroni induced heavy ordered weighted average distance and several particular cases. The study ends with an application to a risk management problem in large companies in Colombia. The main advantage of this approach is that it provides a more general framework for analyzing data in scenarios where the numerical values may have complexities that should be assessed with complex attitudinal characters.
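As background, the classical Bonferroni mean B^{p,q} that these induced heavy operators generalize is straightforward to compute (the inputs below are illustrative):

```python
# Classical Bonferroni mean B^{p,q}(x1..xn) =
#   ( (1 / (n (n-1))) * sum_{i != j} x_i^p x_j^q )^(1 / (p + q)),
# the base operator generalized by the paper's induced heavy OWA variants.

def bonferroni_mean(xs, p=1, q=1):
    n = len(xs)
    s = sum(xs[i] ** p * xs[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1 / (p + q))

b = bonferroni_mean([0.2, 0.5, 0.8])  # illustrative arguments, p = q = 1
```

With p = q = 1 and identical inputs the operator is idempotent, which is a quick sanity check on any implementation.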
       
  • An efficient matheuristic for the minimum-weight dominating set problem
     • Abstract: Publication date: Available online 7 August 2018. Source: Applied Soft Computing. Author(s): Mayra Albuquerque, Thibaut Vidal. Abstract: A minimum dominating set in a graph is a minimum set of vertices such that every vertex of the graph either belongs to it or is adjacent to one vertex of this set. This mathematical object is of high relevance in a number of applications related to social network analysis, design of wireless networks, coding theory, and data mining, among many others. When vertex weights are given, minimizing the total weight of the dominating set gives rise to a problem variant known as the minimum-weight dominating set problem. To solve this problem, we introduce a hybrid matheuristic combining a tabu search with an integer programming solver. The latter is used to solve subproblems in which only a fraction of the decision variables, selected relative to the search history, are left free while the others are fixed. Moreover, we introduce an adaptive penalty to promote the exploration of intermediate infeasible solutions during the search, enhance the algorithm with perturbation and node elimination procedures, and exploit richer neighborhood classes. Extensive experimental analyses on a variety of instance classes demonstrate the good performance of the algorithm, and the contribution of each component to the success of the search is analyzed.
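The matheuristic itself couples tabu search with an IP solver; as a self-contained baseline for the same problem, here is the classical greedy heuristic, which repeatedly picks the vertex dominating the most uncovered vertices per unit weight (graph and weights are illustrative, not the paper's instances):

```python
# Greedy baseline for the minimum-weight dominating set problem (NOT the
# paper's matheuristic): pick the vertex covering the most still-undominated
# vertices per unit of weight, until every vertex is dominated.

def greedy_wds(adj, weight):
    """adj: vertex -> set of neighbors; weight: vertex -> positive weight."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        def gain(v):
            covered = ({v} | adj[v]) & undominated
            return len(covered) / weight[v]
        v = max((u for u in adj if u not in chosen), key=gain)
        chosen.add(v)
        undominated -= {v} | adj[v]
    return chosen

# Tiny star graph: the hub dominates everything despite its larger weight.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
w = {0: 1.5, 1: 1.0, 2: 1.0, 3: 1.0}
sol = greedy_wds(adj, w)
```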
       
  • Extended Belief-Rule-Based System with New Activation Rule Determination
           and Weight Calculation for Classification Problems
     • Abstract: Publication date: Available online 6 August 2018. Source: Applied Soft Computing. Author(s): Long-Hao Yang, Jun Liu, Ying-Ming Wang, Luis Martínez. Abstract: Among the many rule-based systems employed to deal with classification problems, the extended belief-rule-based (EBRB) system is an effective and efficient tool and also has potential for handling both quantitative and qualitative information under uncertainty. Despite many advantages, several drawbacks must be overcome for the better application of the conventional EBRB system, including counterintuitive individual matching degrees, insensitivity in the calculation of individual matching degrees, and the inconsistency problem. Accordingly, by constructing the activation region of extended belief rules and revising the calculation formula of activation weights, new procedures for activation rule determination and weight calculation are proposed to improve the conventional EBRB system, while the original procedures of rule inference and class estimation are retained. Nineteen classification datasets with different numbers of classes are studied to validate the efficiency and effectiveness of the proposed EBRB classification system compared with existing works. The comparison results demonstrate that the proposed EBRB classification system not only obtains higher accuracy than the conventional EBRB system, but also has an excellent response time for classification. More importantly, the results derived from multi-class datasets show the significant performance of the proposed EBRB classification system compared with some state-of-the-art classification tools.
       
  • Indirect adaptive robust mixed H2/H∞ general type-2 fuzzy control of
           uncertain nonlinear systems
     • Abstract: Publication date: Available online 6 August 2018. Source: Applied Soft Computing. Author(s): F. Baghbani, M.-R. Akbarzadeh-T., Alireza Akbarzadeh. Abstract: General type-2 fuzzy logic offers an added ability to manage uncertainties, but it also carries a high computational burden and added mathematical complexity. Further development of this paradigm is hence considerably challenged in terms of both time-constrained industrial implementations and mathematical guarantees of performance. In this paper, we propose a stable indirect adaptive robust mixed H2/H∞ control approach for an α-plane representation of a general type-2 fuzzy framework. The resulting structure is shown to be both computationally efficient and amenable to theoretical analysis. Specifically, this hybrid paradigm is mathematically derived to minimize energy consumption, keep all signals bounded during the transient states, guide the tracking error to zero, and keep the effect of uncertainties such as external disturbances below a desired level. To validate these theoretical results, two sets of simulations on an inverted pendulum and a robotic manipulator with time-varying payloads and friction disturbances are performed. To illustrate real-time computational efficacy, the proposed algorithm is then experimentally implemented on a 3-PSP parallel robot in our laboratory. Comparative analysis indicates that the proposed method achieves smaller tracking error, less consumed control energy, and better robustness against uncertainties when compared with its type-1 and interval type-2 fuzzy counterparts, while the computational time for each control cycle increases only slightly compared with its interval type-2 counterpart.
       
  • A new hybrid ant colony optimization algorithm for solving the no-wait
            flow shop scheduling problems
     • Abstract: Publication date: Available online 6 August 2018. Source: Applied Soft Computing. Author(s): Orhan Engin, Abdullah Güçlü. Abstract: This paper proposes an effective new hybrid ant colony algorithm based on crossover and mutation mechanisms for no-wait flow shop scheduling with the criterion of minimizing the maximum completion time. The no-wait flow shop is known as a typical NP-hard combinatorial optimization problem. The hybrid ant colony algorithm is applied to 192 benchmark instances from the literature in order to minimize makespan. The performance of the proposed hybrid ant colony algorithm is compared to the adaptive learning approach and the genetic heuristic algorithm, which were used in previous studies to solve the same set of benchmark problems. The computational experiments show that the proposed hybrid ant colony algorithm provides better results than the other algorithms.
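The objective being minimized here, the no-wait makespan of a job permutation, can be computed directly: each job's start on the first machine is delayed just enough that it then flows through all machines without waiting. A sketch with illustrative processing times (the ant colony search itself is not reproduced):

```python
# No-wait flow shop makespan for a given job permutation: a job's operations
# must follow each other with zero waiting, so its start on machine 1 is the
# smallest delay that keeps every machine free when the job arrives.

def no_wait_makespan(seq, proc):
    """proc[j][m] = processing time of job j on machine m."""
    m_count = len(proc[0])
    finish = [0.0] * m_count                # release time of each machine
    for j in seq:
        prefix = [0.0]
        for t in proc[j]:
            prefix.append(prefix[-1] + t)   # cumulative time within the job
        # operation on machine m runs in [start + prefix[m], start + prefix[m+1]]
        start = max(0.0, max(finish[m] - prefix[m] for m in range(m_count)))
        for m in range(m_count):
            finish[m] = start + prefix[m + 1]
    return finish[-1]

proc = [[2, 3], [3, 1]]  # 2 jobs x 2 machines, illustrative times
best = min(no_wait_makespan(s, proc) for s in [(0, 1), (1, 0)])
```

Enumerating both permutations of this toy instance shows why sequencing matters: the two orders give different makespans.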
       
  • Assessment of Grapevine Variety Discrimination Using Stem Hyperspectral
           Data and AdaBoost of Random Weight Neural Networks
     • Abstract: Publication date: Available online 4 August 2018. Source: Applied Soft Computing. Author(s): Armando Fernandes, Andrei Utkin, José Eiras-Dias, José Silvestre, Jorge Cunha, Pedro Melo-Pinto. Abstract: Grapevine variety discrimination is important for growers to be able to certify plants; however, it is not simple to do using conventional ampelometric methods or DNA-based methods. The present work proposes, to the best of the authors' knowledge for the first time, to differentiate grapevine varieties using only spectroscopic stem information. A total of 1200 measurements were gathered non-destructively in the field. The spectroscopic data was processed with a combination of AdaBoost and random weight neural networks (RWNN), which is still rare in the scientific literature. Ten grapevine varieties, 5 red and 5 white, were used: Cabernet Sauvignon (CSvar), one of the most widely planted varieties worldwide, and nine Portuguese autochthonous varieties, including Touriga Franca and Touriga Nacional (TNvar), which are very important in Port wine production. Portugal is known to be one of the countries with the largest number of autochthonous varieties in the world. In general, the overall correct classification percentages were significantly better than the 10% level of random classification. True positive rates for a classifier separating the ten varieties varied between 41.7 and 70.8%, depending on the variety. The false positive rate (FPR) varied between 2.5 and 7.3%. TNvar and CSvar measurements were correctly classified in 70.8 and 70% of cases, with FPRs of 5.3 and 3.3%, respectively. AdaBoost of extreme learning machines, random vector functional link with pseudoinverse and no regularization, support vector machines and random forest did not provide better results than the AdaBoost of RWNN.
       
  • An Optimum ANN-based Breast Cancer Diagnosis: Bridging Gaps between ANN
            Learning and Decision-making Goals
     • Abstract: Publication date: Available online 4 August 2018. Source: Applied Soft Computing. Author(s): Ruholla Jafari-Marandi, Samaneh Davarzani, Maryam Soltanpour Gharibdousti, Brian K. Smith. Abstract: It is difficult to overestimate the importance of appropriate breast cancer diagnosis, as the disease ranks second among all cancers that lead to death in women. Many efforts propose data analytic tools that succeed in predicting breast cancer with high accuracy; the literature is abundant with studies that report close-to-perfect prediction rates. This paper shifts the focus of improvement from higher accuracy towards better decision-making. Quantitatively, we show that more accuracy does not always lead to better decisions, and that the process of Artificial Neural Network (ANN) learning can benefit from the inculcation of decision-making goals. We propose a decision-oriented ANN classification method called Life-Sensitive Self-Organizing Error-Driven (LS-SOED), which enhances ANN's performance in decision-making. LS-SOED combines the supervised and unsupervised learning power of ANNs to handle the inconclusive nature of hidden patterns in the data in such a way that the best possible decisions are made, i.e. the least misclassification cost (the minimum possible loss of life) is achieved. The learning power of LS-SOED matches, if not exceeds, the best performances reported in the literature when the objective is to achieve the highest accuracy. When the objective is to minimize misclassification costs, we show that, on average, more than 30 years of life can be saved collectively for a group of 283 people in one dataset, and more than 8 years of life for a group of 57 people in another.
       
  • Genetic ACCGA: Co-similarity based Co-clustering using genetic algorithm
     • Abstract: Publication date: November 2018. Source: Applied Soft Computing, Volume 72. Author(s): Syed Fawad Hussain, Shahid Iqbal. Abstract: Co-clustering refers to the simultaneous clustering of objects and their features. It is used as a clustering technique when the data exhibit similarities only in a subset of features instead of the whole feature set. Clustering (and co-clustering) has been shown to be an optimization problem, which makes evolutionary algorithms suitable candidates for optimizing the cluster labels. Genetic algorithms have been used in the literature for data clustering by optimizing cluster labels to reduce the mean distance from cluster centers. Using only genetic operators and Euclidean distances, however, has resulted in limited success. In this paper, we propose a Genetic Algorithm framework for co-clustering data. What makes this contribution significant and distinctly unique is that we propose a co-similarity objective function that uses multiple objective functions to seamlessly integrate the co-clustering framework into the optimization problem. Co-similarity matrices are intertwined row and column similarity matrices that are computed on the basis of each other. To the best of our knowledge, we are the first to propose the use of a Genetic Algorithm to optimize co-similarity matrices for the co-clustering task. We conduct several experiments to analyse the performance of our proposed approach and compare it with numerous state-of-the-art clustering and co-clustering algorithms on a variety of real-world datasets. Our results show that the proposed approach significantly outperforms the other clustering and co-clustering algorithms on all the datasets tested.
       
  • E-BRM: A constraint handling technique to solve optimization problems with
           evolutionary algorithms
     • Abstract: Publication date: November 2018. Source: Applied Soft Computing, Volume 72. Author(s): Max de Castro Rodrigues, Solange Guimarães, Beatriz Souza Leite Pires de Lima. Abstract: This work presents an enhanced technique to handle constraints in optimization problems solved by evolutionary algorithms: the extended balanced ranking method (E-BRM). It comprises a self-adaptive procedure that deals with two ranking lists, for feasible and infeasible solutions, which are merged during the search process according to the amount of feasible and infeasible candidates within the population. The method is designed to work uncoupled from the optimization algorithm, thus being suitable for implementation along with distinct algorithms. Therefore, to evaluate the proposed constraint-handling technique (CHT), and compare its performance with other CHTs that follow the same uncoupled approach, we have implemented them all into a canonical Genetic Algorithm (GA). Several tests were executed over well-known suites of benchmark functions and also engineering problems. The results provided by the different CHTs are compared not only by direct comparisons, but also by the nonparametric statistical Sign test and Wilcoxon Signed-Rank test. Those comparisons indicate that the E-BRM presented an improved overall performance.
       
  • A Grey Wolf Optimizer-based neural network coupled with response surface
           method for modeling the strength of siro-spun yarn in spinning mills
     • Abstract: Publication date: November 2018. Source: Applied Soft Computing, Volume 72. Author(s): Esmaeil Hadavandi, Sobhan Mostafayi, Parham Soltani. Abstract: The tenacity of spun yarns is related to many process parameters and fiber properties. Different types of predictive models have been developed to predict spun yarn tensile strength; however, no investigation has yet been carried out on the prediction of siro-spun yarn tensile strength. This is due to the fact that the relationship between yarn strength and fiber properties and process parameters is essentially non-linear, and therefore the prediction of yarn tenacity is a highly complex issue. This paper proposes a Grey Wolf Optimizer (GWO) based neural network simulator called "GWNN" for prediction of siro-spun yarn tensile strength. In the proposed GWNN, a GWO algorithm is applied as a global search method to determine the weights of a Multi-Layer Perceptron (MLP). Additionally, a new Response Surface Methodology (RSM) is proposed to determine the appropriate levels of fiber and yarn linear densities and processing parameters for yarn tensile strength. The proposed RSM uses the GWNN model as a non-linear response surface simulator and has higher accuracy than the classical RSM. The prediction accuracy of the GWNN was compared with that of an MLP neural network trained with the Back-Propagation (BP) algorithm and a multiple linear regression model, as well as three evolutionary-based neural networks. It was found that the proposed GWNN achieves higher accuracy than the other models. Additionally, the observed trends in the variation of yarn tensile strength with the input variables were discussed with reference to the inner structure of the yarns.
       
  • Integration of Z-numbers and Bayesian Decision Theory: A Hybrid Approach
           to Decision Making under Uncertainty and Imprecision
     • Abstract: Publication date: Available online 3 August 2018. Source: Applied Soft Computing. Author(s): Nina Marhamati, Elham K Buxton, Shahram Rahimi. Abstract: For over four decades decision support systems have been emerging to facilitate decision making under uncertain conditions and in a rapidly changing unstructured environment. Most decision support approaches, such as Bayesian decision theory and computing with words (CW), compare and analyze the consequences of different decision alternatives. Bayesian decision methods use probability theory to handle uncertainty and have been widely used in different areas for estimation and prediction as well as offering decision support. On the other hand, computing with words and approximate reasoning apply fuzzy set theory to deal with imprecise measurements and inexact information and are most concerned with propositions stated in natural language. The concept of a Z-number [1] has been recently introduced to represent propositions and their reliability in natural language. This work proposes a methodology that integrates Z-numbers and Bayesian decision theory to provide decision support when precise measurements and exact values of parameters and probabilities are not available. The relationships and computing methods required for such integration are derived and mathematically proved. The proposed hybrid methodology benefits from both approaches and provides a decision support system based on imprecise and uncertain information drawn from natural language. In order to demonstrate the proof of concept, the proposed methodology has been applied to a realistic case study on breast cancer diagnosis.
       
  • Efficiency Evaluation in Two-stage Data Envelopment Analysis under a Fuzzy
           Environment: A Common-Weights Approach
     • Abstract: Publication date: Available online 2 August 2018. Source: Applied Soft Computing. Author(s): Adel Hatami-Marbini, Saber Saati. Abstract: Data envelopment analysis (DEA) is widely recognized as an effective technique for efficiency measurement. In practice, since many production systems, such as broadcasting companies, banking and R&D activities, include two processes connected in series, we need to utilize two-stage DEA models to identify the sources of inefficiency and, in turn, explore appropriate options for improving performance. The inability to generate the actual weights is not only an ongoing challenge in traditional DEA models; it can also have serious repercussions for contemporary DEA models (e.g., two-stage DEA). This paper presents a common-weights method for two-stage structures that allows us to consider equality of opportunity in a fuzzy environment when evaluating the system efficiency and the component process efficiencies. The proposed approach first seeks upper bounds on factor weights and then determines a set of common weights by solving a single linear programming problem. We illustrate the approach with a data set taken from the literature.
       
  • Collaborative model based UAV tracking via local kernel feature
     • Abstract: Publication date: Available online 1 August 2018. Source: Applied Soft Computing. Author(s): Yong Wang, Xinbin Luo, Lu Ding, Shan Fu, Shiqiang Hu. Abstract: Partial occlusion is a challenging problem in unmanned aerial vehicle (UAV) tracking. In this paper, we propose a novel collaborative-model-based tracking method which exploits a holistic model and a part model to track an object consistently through the entire video sequence. Specifically, we first develop a robust local kernel feature which learns from the data around the object to encode its geometric information. Next, the target is divided into four parts, and a structured support vector machine (SSVM) is employed to integrate the local feature into a robust visual tracking framework. Furthermore, we adopt a metric to measure the reliability of a patch. A Kalman filter is used to fuse the holistic-model and part-model tracking results smoothly according to this metric. Extensive experimental results demonstrate that our tracker achieves performance comparable to state-of-the-art methods.
       
  • A multi criteria comprehensive evaluation approach for emergency response
           capacity with interval 2-tuple linguistic information
     • Abstract: Publication date: Available online 31 July 2018. Source: Applied Soft Computing. Author(s): Kaixuan Qi, Qingsong Wang, Qiangling Duan, Liang Gong, Jinhua Sun, K.M. Liew, Lin Jiang. Abstract: Emergency response capacity evaluation (ERCE) is an important and challenging issue in emergency management, given the frequent occurrence of emergency events. Because of their different backgrounds and preferences, decision makers (DMs) often prefer to use linguistic variables from different linguistic term sets to express their decision information about the importance and ratings of the multiple criteria involved in the ERCE issue, some of which may be imprecise and uncertain. Therefore, this paper proposes a multi criteria comprehensive evaluation (MCCE) approach for ERCE with interval 2-tuple linguistic information. First, a new method for comparison between interval 2-tuples is proposed. Then, the classic analytical hierarchy process is extended into the ITL-AHP, in which the interval 2-tuple linguistic preference relation (ITLPR) and its multiplicative consistency are defined. An iterative algorithm to improve the consistency level of an ITLPR is proposed. Based on the improved ITLPRs, the priority weights of the criteria are derived by the normalizing rank summation method. In the individual decision information collection process, both subjective and objective weights of DMs are considered. Furthermore, some interval 2-tuple interval weighted aggregation operators, in which the weight vectors take the form of interval 2-tuple linguistic information, are developed. Finally, a numerical illustration and comparison analyses are provided to elucidate the feasibility and superiority of the proposed method. The proposed method provides a flexible way to perceive and appraise emergency response capacity and an effective tool for emergency management performance evaluation and decision making under fuzzy and uncertain information environments.
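The interval 2-tuple model builds on the standard 2-tuple linguistic representation of Herrera and Martínez, in which a numeric value β in [0, g] is encoded as a linguistic term plus a symbolic translation α in [-0.5, 0.5). A sketch of that base encoding (the term set below is illustrative, not the paper's):

```python
# Base 2-tuple linguistic representation: beta in [0, g] <-> (term s_i, alpha)
# with alpha = beta - i the symbolic translation; the paper works with
# interval-valued extensions of this encoding.

def to_two_tuple(beta, terms):
    """Encode a numeric assessment beta as (closest term, symbolic translation)."""
    i = min(round(beta), len(terms) - 1)
    return terms[i], beta - i

def to_beta(term, alpha, terms):
    """Inverse transformation back to the numeric scale."""
    return terms.index(term) + alpha

terms = ["none", "low", "medium", "high", "perfect"]  # g = 4, illustrative
t, a = to_two_tuple(2.6, terms)                       # -> ("high", -0.4)
```

The encoding is lossless: applying the inverse transformation recovers the original β, which is what makes aggregation on this scale exact rather than approximate.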
       
  • Uncertainty Representation using Fuzzy-Entropy Approach: Special
           
     • Abstract: Publication date: Available online 31 July 2018. Source: Applied Soft Computing. Author(s): Abstract: Remotely sensed high-resolution satellite images contain various information in the context of changes. By analyzing this information very minutely, changes that occurred in various atmospheric phenomena can be identified. Therefore, in this study, a novel change detection method is proposed using fuzzy set theory. The proposed method represents the uncertain changes in the form of a fuzzy set using the corresponding degrees of membership. By using fuzzy set operators, such as the max and min functions, this study derives very useful information from the images. This study also proposes a new function to identify the boundary of uncertain changes. Further, the study is extended to identify the similarity or dissimilarity between different images of the same event that contain various uncertain changes. To recognize the changes at a fine-grained level, this study introduces a way to represent the fuzzy information in a granular way. The utilization of the proposed method is shown by recognizing changes and retrieving information from remotely sensed high-resolution satellite images. Various experimental results exhibit the robustness of the study.
       
  • FIL-DGA Based Hardware Optimization System
     • Abstract: Publication date: Available online 30 July 2018. Source: Applied Soft Computing. Author(s): Soumyadip Das, Sumitra Mukhopadhyay. Abstract: This paper presents a new algorithm, the dominant character genetic algorithm (DGA), along with its hardware architecture, the dominant character genetic algorithm hardware architecture (DGA-Arch), for real-parameter optimization problems. In DGA, the evolution process is inspired by the dominant characteristics present in human cognizance and is realized by varying the mutation probability of the genes. DGA-Arch, in turn, is a resource-efficient, highly flexible architecture which is designed and integrated with a field programmable gate array-in-loop (FIL) environment, yielding an overall FIL-based DGA (FIL-DGA) optimization system. DGA-Arch was implemented on a Virtex IV (ML401, XC4VLX25) field programmable gate array (FPGA) chip with a maximum of 5% logic slice utilization and tested on 18 benchmark problems. On average, the proposed hardware manifested a speedup of about 130x over a software genetic algorithm (GA) implementation for the test problems. The performance is also compared, using 5 modified functions, with different GA-based hardware reported in the existing literature, and the proposed design is found to optimize problems more accurately with greater repeatability and diversity. DGA-Arch reached convergence within 0.0005% to 0.009% of function evaluations relative to the total search space and requires almost no repeated synthesis in different problem environments. Finally, the FIL-DGA system has been employed to adapt the parameters of a few classical engineering problems and a real-world application in a cognitive radio environment.
       
  • An improved grey model optimized by multi-objective ant lion optimization
           algorithm for annual electricity consumption forecasting
     • Abstract: Publication date: Available online 29 July 2018. Source: Applied Soft Computing. Author(s): Jianzhou Wang, Pei Du, Haiyan Lu, Wendong Yang, Tong Niu. Abstract: Accurate and stable annual electricity consumption forecasting plays a vital role in modern social and economic development by providing effective planning and guaranteeing a reliable supply of sustainable electricity. However, establishing a robust method that simultaneously improves the prediction accuracy and stability of electricity consumption forecasting has proven to be a highly challenging task. Most previous studies pay attention only to enhancing prediction accuracy and usually ignore the significance of forecasting stability, despite its importance to the effectiveness of forecasting models. Considering the characteristics of annual power consumption data, and that a single criterion (accuracy or stability) is insufficient, in this study a novel hybrid forecasting model based on an improved grey forecasting model optimized by a multi-objective ant lion optimization algorithm is developed, which can not only dynamically choose the best input training sets, but also obtain satisfactory forecasting results with high accuracy and strong stability. Case studies of annual power consumption datasets from several regions in China are utilized as illustrative examples to estimate the effectiveness and efficiency of the proposed hybrid forecasting model. Finally, the experimental results indicate that the proposed forecasting model is superior to the comparison models.
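For context, the basic GM(1,1) grey model that the paper improves and then tunes can be sketched in a few lines: accumulate the series, fit the grey differential equation by least squares, and difference the fitted accumulation to forecast. The series below is illustrative, not the paper's consumption data:

```python
import math

# Basic GM(1,1) grey forecasting model (the baseline the paper improves):
# fit x0(k) + a*z(k) = b by least squares on the accumulated series.

def gm11_forecast(x0, steps=1):
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # 2x2 normal equations for (a, b); assumes a != 0 for the closed form below
    szz = sum(v * v for v in z); sz = sum(z); sy = sum(y)
    szy = sum(v * w for v, w in zip(z, y)); m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # forecast = differences of the fitted accumulated series
    return [x1_hat(n + s - 1) - x1_hat(n + s - 2) for s in range(1, steps + 1)]

# Roughly exponential series (10% yearly growth); forecast the next value.
pred = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=1)
```

On this near-geometric series the one-step forecast comes out close to the true next term (about 146), which is the regime grey models are designed for: short, smooth, near-exponential data.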
       
  • Continuous Greedy Randomized Adaptive Search Procedure for data clustering
     • Abstract: Publication date: Available online 20 July 2018. Source: Applied Soft Computing. Author(s): Eduardo Queiroga, Anand Subramanian, Lucídio dos Anjos F. Cabral. Abstract: Cluster analysis is an unsupervised machine learning task that aims at finding the most similar groups of objects, given a prespecified similarity measure. When modeled as an optimization problem, clustering problems are generally NP-hard. Therefore, the use of metaheuristic approaches appears to be a promising alternative. In this paper, a continuous greedy randomized adaptive search procedure (C-GRASP) approach is proposed to solve a partitional clustering problem that aims at minimizing the intra-cluster distances. Computational experiments carried out on existing databases showed that the results obtained by the proposed algorithm were, on average, superior to those found by other well-known metaheuristics, as well as to those achieved by state-of-the-art algorithms from the literature.
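The core GRASP building block is greedy randomized construction with a restricted candidate list (RCL). The sketch below uses it to seed cluster centers that are far apart; this is a simplified discrete illustration of the idea, not the paper's continuous C-GRASP:

```python
import random

# GRASP-style greedy randomized construction with a restricted candidate list:
# at each step, keep only candidates whose greedy value is within a fraction
# alpha of the best, then pick one at random. Used here to seed k cluster
# centers that are mutually distant (illustrative, not the paper's C-GRASP).

def grasp_seed_centers(points, k, alpha=0.3, rng=random.Random(0)):
    centers = [rng.choice(points)]
    while len(centers) < k:
        # greedy value: squared distance of each remaining point to its closest center
        dist = {p: min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
                for p in points if p not in centers}
        lo, hi = min(dist.values()), max(dist.values())
        threshold = hi - alpha * (hi - lo)           # keep the "far" candidates
        rcl = [p for p, d in dist.items() if d >= threshold]
        centers.append(rng.choice(rcl))              # randomized greedy pick
    return centers

# Two well-separated groups of points; the seeds land one in each group.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers = grasp_seed_centers(pts, 2)
```

The randomization inside the RCL is what lets repeated GRASP iterations explore different high-quality starting solutions before local search refines them.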
       
  • Hybridizing Bat Algorithm with Artificial Bee Colony for Combined Heat and
           Power Economic Dispatch
    • Abstract: Publication date: Available online 18 July 2018. Source: Applied Soft Computing. Author(s): R. Murugan, M.R. Mohan, C. Christober Asir Rajan, P. Deiva Sundari, S. Arunachalam. Abstract: This paper presents a new algorithm that hybridizes the Bat Algorithm (BA) and Artificial Bee Colony (ABC) with a Chaotic-based Self-Adaptive (CSA) search strategy (CSA-BA-ABC) to solve large-scale, highly non-linear, non-convex, non-smooth, non-differentiable, non-continuous, multi-peak, and complex Combined Heat and Power Economic Dispatch (CHPED) problems. The proposed hybrid algorithm escapes local optima more readily and converges faster than standard BA and ABC. It works through three mechanisms. The first is a novel adaptive search mechanism in which one of three search phases (the BA phase, a directed onlooker bee phase, and a modified scout bee phase) is selected based on the aging level of an individual's best solution (pbest); in this way, the ABC phases can assist the BA phase with deeper exploration/exploitation as an alternative. The second mechanism, CSA, periodically updates the algorithm's control parameters using a chaotic system, based on the prevailing search efficiency in the swarm. The third mechanism enhances performance by incorporating each individual's directional information, habitat selection, and self-adaptive compensation. The effectiveness and robustness of the proposed algorithm are tested on a set of 23 benchmark functions and three CHPED problems. The results obtained in terms of solution quality, computational performance, and convergence characteristics are compared with those of various algorithms to demonstrate the ability of the proposed approach and its robustness in finding a more cost-effective solution.
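The first mechanism (phase selection driven by how long each individual's pbest has stagnated) can be illustrated with a toy continuous minimizer. This is loosely inspired by the abstract's description, not the authors' CSA-BA-ABC: the phase thresholds, update rules, and names are all hypothetical simplifications, and the chaotic parameter control is omitted entirely.

```python
import random

def stagnation_hybrid(f, dim, bounds, pop=15, iters=300, limit=10, seed=1):
    """Toy minimizer: each individual picks a search phase (BA-like move
    toward the global best, onlooker-like local perturbation, or scout-like
    restart) based on how long its best value has stagnated."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    fit = [f(x) for x in X]
    age = [0] * pop  # iterations since this individual last improved
    gbest = min(range(pop), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(pop):
            if age[i] < limit // 2:      # BA-like phase: fly toward gbest
                cand = []
                for d in range(dim):
                    V[i][d] += rng.uniform(0, 1) * (X[gbest][d] - X[i][d])
                    cand.append(min(hi, max(lo, X[i][d] + V[i][d])))
            elif age[i] < limit:         # onlooker-like phase: local step
                j = rng.randrange(pop)
                d = rng.randrange(dim)
                cand = X[i][:]
                cand[d] += rng.uniform(-1, 1) * (X[i][d] - X[j][d])
                cand[d] = min(hi, max(lo, cand[d]))
            else:                        # scout-like phase: random restart
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
                V[i] = [0.0] * dim
            fc = f(cand)
            if fc < fit[i]:              # greedy acceptance resets the age
                X[i], fit[i], age[i] = cand, fc, 0
            else:
                age[i] += 1
        gbest = min(range(pop), key=lambda i: fit[i])
    return X[gbest], fit[gbest]
```

The point of the aging counter is that exploitation (BA phase) is tried first, deeper local search kicks in once improvement slows, and a full restart is the last resort, mirroring the escalation the abstract describes.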
       
  • Modified Bayesian data fusion model for travel time estimation considering
           spurious data and traffic conditions
    • Abstract: Publication date: Available online 17 July 2018. Source: Applied Soft Computing. Author(s): Mil Soknath, Piantanakulchai Mongkut. Abstract: This paper presents a framework for developing a travel time estimation model that uses multiple data sources while accounting for spurious data and traffic conditions. A modified Bayesian data fusion approach, combined with a Gaussian mixture model, fuses travel time data estimated from different types of sensors to improve the accuracy, precision, and completeness of the data in terms of spatial and temporal distribution. Two features are added to existing models: traffic conditions classified by the Gaussian mixture model, and the bias of each individual sensor, estimated by introducing a non-zero-mean Gaussian distribution learned from the training dataset. The methodology and computational procedure are presented. The Gaussian mixture model classifies the state of traffic into a predefined number of traffic regimes; once a traffic condition is classified, the modified Bayesian data fusion approach estimates the travel time. The proposed model offers explicit advantages over the basic Bayesian approach, such as robustness to noisy data, reduced bias in individual estimates, and more precise travel time estimation. Two real-world datasets and one simulated dataset are used to evaluate the model's performance under three traffic regimes: free flow, transitional flow, and congested flow. Compared with benchmark models, the results show a significant improvement in travel time estimation accuracy, with mean absolute percentage errors (MAPE) in the range of 3.46% to 16.3%.
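The core fusion step described here is standard Bayesian combination of Gaussian measurements, extended with a per-sensor bias term. A minimal sketch, assuming each sensor's bias and variance have already been learned from training data for the current traffic regime (the function name and input layout are hypothetical):

```python
def fuse_travel_times(readings):
    """Precision-weighted Bayesian fusion of sensor travel time readings.

    readings: iterable of (estimate, bias, variance) per sensor, where bias
    is the learned non-zero mean of that sensor's error distribution.
    Returns (fused_mean, fused_variance)."""
    num = 0.0
    prec = 0.0
    for est, bias, var in readings:
        num += (est - bias) / var  # debias, then weight by precision 1/var
        prec += 1.0 / var
    return num / prec, 1.0 / prec
```

Because each reading is weighted by its precision, a noisy or spurious sensor (large variance) contributes little, and the fused variance is always smaller than any single sensor's, which is the "more precise estimation" advantage the abstract cites.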
       
  • Semi-supervised learning based distributed attack detection framework for
           IoT
    • Abstract: Publication date: Available online 1 July 2018. Source: Applied Soft Computing. Author(s): Shailendra Rathore, Jong Hyuk Park. Abstract: Alongside the development of the Internet of Things (IoT), security attacks are also increasing day by day. A number of centralized attack detection mechanisms have been proposed for IoT, wherein an attack detection system deployed at a central point in the network collects data and classifies it as "attack" or "normal" using a supervised machine learning algorithm. These mechanisms, however, have failed to achieve significant results owing to the distinct requirements of IoT devices, such as scalability, distribution, resource limitations, and low latency. Moreover, applying supervised machine learning for classification requires a significant amount of labeled data. In this paper, we introduce a fog-based attack detection framework that relies on the fog computing paradigm and a newly proposed ELM-based Semi-supervised Fuzzy C-Means (ESFCM) method. As an extension of cloud computing, fog computing enables attack detection at the network edge and supports distributed detection. The ESFCM method uses a semi-supervised fuzzy c-means algorithm to handle the labeled-data issue and an Extreme Learning Machine (ELM) algorithm to provide good generalization performance at a faster detection rate. An evaluation on the NSL-KDD dataset demonstrates that the proposed framework outperforms the centralized attack detection framework; specifically, it recorded a lower detection time of 11 milliseconds and an accuracy rate of 86.53%.
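The semi-supervised fuzzy c-means idea can be sketched compactly: the few labeled samples keep crisp (clamped) memberships, while unlabeled samples receive the standard FCM membership update, so scarce labels still anchor the clusters. This is an illustration of the general technique under stated simplifications, not the paper's ESFCM (the ELM stage is omitted, and all names are hypothetical):

```python
import random

def semi_supervised_fcm(points, c, labels, m=2.0, iters=50, seed=0):
    """Semi-supervised fuzzy c-means sketch.

    labels: dict mapping point index -> known cluster index; those rows of
    the membership matrix U stay crisp. Returns (centers, U)."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    centers = [list(rng.choice(points)) for _ in range(c)]
    U = [[1.0 / c] * c for _ in range(n)]
    for j, k in labels.items():          # clamp labeled memberships
        U[j] = [1.0 if i == k else 0.0 for i in range(c)]
    for _ in range(iters):
        # Center update: fuzzily weighted means
        for i in range(c):
            w = [U[j][i] ** m for j in range(n)]
            tot = sum(w)
            if tot > 0:
                centers[i] = [
                    sum(w[j] * points[j][d] for j in range(n)) / tot
                    for d in range(dim)
                ]
        # Membership update for unlabeled points only
        for j in range(n):
            if j in labels:
                continue
            d2 = [sum((points[j][t] - centers[i][t]) ** 2
                      for t in range(dim)) for i in range(c)]
            if any(v == 0.0 for v in d2):
                U[j] = [1.0 if v == 0.0 else 0.0 for v in d2]
                continue
            U[j] = [1.0 / sum((d2[i] / d2[k]) ** (1.0 / (m - 1))
                              for k in range(c)) for i in range(c)]
    return centers, U
```

In the framework described above, a fast classifier (the ELM) would then be trained on the membership-completed data at each fog node, which is where the reported low detection latency comes from.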
       
  • A Genetic Programming Method for Feature Mapping to Improve Prediction of
           HIV-1 Protease Cleavage Site
    • Abstract: Publication date: Available online 30 June 2018. Source: Applied Soft Computing. Author(s): Abdolhossein Fathi, Rasool Sadeghi. Abstract: The human immunodeficiency virus (HIV) causes acquired immunodeficiency syndrome (AIDS), which has profound implications in terms of both economic burden and loss of life. Modeling and examining HIV protease cleavage of amino acid sequences can contribute to controlling this disease and producing more effective drugs. This paper introduces a new method for encoding and characterizing amino acid sequences, and a new model for predicting amino acid sequence cleavage by HIV protease. The proposed encoding scheme utilizes a combination of the amino acids' spatial and structural features in conjunction with the 20 amino acid sequences to ensure that their physicochemical and sequencing features are all taken into account. The proposed HIV-1 cleavage prediction model is built by combining genetic programming with a support vector machine. Evaluations on various datasets demonstrate the superior performance of the proposed encoding and the better accuracy of the proposed HIV-1 cleavage prediction model compared with state-of-the-art methods.
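For context, the baseline that feature-mapping methods in this area aim to improve on is the orthogonal (one-hot) encoding of the substrate octamer: each of the 8 residues around the candidate cleavage site becomes a 20-dimensional indicator vector. A minimal sketch (the octamer in the usage example is hypothetical, not taken from the paper's datasets):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot_encode(octamer):
    """Orthogonal encoding of an 8-residue substrate window: returns a flat
    160-dimensional 0/1 vector (8 residues x 20 indicators)."""
    vec = []
    for aa in octamer.upper():
        idx = AMINO_ACIDS.index(aa)  # raises ValueError on unknown residue
        vec.extend(1 if i == idx else 0 for i in range(20))
    return vec
```

Such sparse vectors carry no physicochemical information, which is precisely the gap the paper's encoding (spatial and structural features, mapped through genetic programming before the SVM) is designed to fill.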
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 