  Subjects -> COMPUTER SCIENCE (Total: 2011 journals)
    - ANIMATION AND SIMULATION (30 journals)
    - AUTOMATION AND ROBOTICS (98 journals)
    - COMPUTER ARCHITECTURE (9 journals)
    - COMPUTER ENGINEERING (9 journals)
    - COMPUTER GAMES (16 journals)
    - COMPUTER PROGRAMMING (24 journals)
    - COMPUTER SCIENCE (1172 journals)
    - COMPUTER SECURITY (46 journals)
    - DATA BASE MANAGEMENT (13 journals)
    - DATA MINING (32 journals)
    - E-BUSINESS (22 journals)
    - E-LEARNING (29 journals)
    - IMAGE AND VIDEO PROCESSING (39 journals)
    - INFORMATION SYSTEMS (108 journals)
    - INTERNET (92 journals)
    - SOCIAL WEB (50 journals)
    - SOFTWARE (34 journals)
    - THEORY OF COMPUTING (8 journals)

COMPUTER SCIENCE (1172 journals)

Showing 1 - 200 of 872 Journals sorted alphabetically
3D Printing and Additive Manufacturing     Full-text available via subscription   (Followers: 15)
Abakós     Open Access   (Followers: 4)
ACM Computing Surveys     Hybrid Journal   (Followers: 24)
ACM Journal on Computing and Cultural Heritage     Hybrid Journal   (Followers: 9)
ACM Journal on Emerging Technologies in Computing Systems     Hybrid Journal   (Followers: 13)
ACM Transactions on Accessible Computing (TACCESS)     Hybrid Journal   (Followers: 4)
ACM Transactions on Algorithms (TALG)     Hybrid Journal   (Followers: 16)
ACM Transactions on Applied Perception (TAP)     Hybrid Journal   (Followers: 6)
ACM Transactions on Architecture and Code Optimization (TACO)     Hybrid Journal   (Followers: 9)
ACM Transactions on Autonomous and Adaptive Systems (TAAS)     Hybrid Journal   (Followers: 7)
ACM Transactions on Computation Theory (TOCT)     Hybrid Journal   (Followers: 12)
ACM Transactions on Computational Logic (TOCL)     Hybrid Journal   (Followers: 3)
ACM Transactions on Computer Systems (TOCS)     Hybrid Journal   (Followers: 18)
ACM Transactions on Computer-Human Interaction     Hybrid Journal   (Followers: 15)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 6)
ACM Transactions on Design Automation of Electronic Systems (TODAES)     Hybrid Journal   (Followers: 2)
ACM Transactions on Economics and Computation     Hybrid Journal  
ACM Transactions on Embedded Computing Systems (TECS)     Hybrid Journal   (Followers: 4)
ACM Transactions on Information Systems (TOIS)     Hybrid Journal   (Followers: 21)
ACM Transactions on Intelligent Systems and Technology (TIST)     Hybrid Journal   (Followers: 8)
ACM Transactions on Interactive Intelligent Systems (TiiS)     Hybrid Journal   (Followers: 4)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)     Hybrid Journal   (Followers: 10)
ACM Transactions on Reconfigurable Technology and Systems (TRETS)     Hybrid Journal   (Followers: 7)
ACM Transactions on Sensor Networks (TOSN)     Hybrid Journal   (Followers: 9)
ACM Transactions on Speech and Language Processing (TSLP)     Hybrid Journal   (Followers: 10)
ACM Transactions on Storage     Hybrid Journal  
ACS Applied Materials & Interfaces     Full-text available via subscription   (Followers: 25)
Acta Automatica Sinica     Full-text available via subscription   (Followers: 3)
Acta Universitatis Cibiniensis. Technical Series     Open Access  
Ad Hoc Networks     Hybrid Journal   (Followers: 11)
Adaptive Behavior     Hybrid Journal   (Followers: 11)
Advanced Engineering Materials     Hybrid Journal   (Followers: 26)
Advanced Science Letters     Full-text available via subscription   (Followers: 9)
Advances in Adaptive Data Analysis     Hybrid Journal   (Followers: 7)
Advances in Artificial Intelligence     Open Access   (Followers: 16)
Advances in Calculus of Variations     Hybrid Journal   (Followers: 2)
Advances in Catalysis     Full-text available via subscription   (Followers: 6)
Advances in Computational Mathematics     Hybrid Journal   (Followers: 18)
Advances in Computer Science : an International Journal     Open Access   (Followers: 15)
Advances in Computing     Open Access   (Followers: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 52)
Advances in Engineering Software     Hybrid Journal   (Followers: 27)
Advances in Geosciences (ADGEO)     Open Access   (Followers: 11)
Advances in Human Factors/Ergonomics     Full-text available via subscription   (Followers: 27)
Advances in Human-Computer Interaction     Open Access   (Followers: 21)
Advances in Materials Sciences     Open Access   (Followers: 16)
Advances in Operations Research     Open Access   (Followers: 12)
Advances in Parallel Computing     Full-text available via subscription   (Followers: 7)
Advances in Porous Media     Full-text available via subscription   (Followers: 5)
Advances in Remote Sensing     Open Access   (Followers: 40)
Advances in Science and Research (ASR)     Open Access   (Followers: 6)
Advances in Technology Innovation     Open Access   (Followers: 4)
AEU - International Journal of Electronics and Communications     Hybrid Journal   (Followers: 8)
African Journal of Information and Communication     Open Access   (Followers: 8)
African Journal of Mathematics and Computer Science Research     Open Access   (Followers: 4)
Air, Soil & Water Research     Open Access   (Followers: 9)
AIS Transactions on Human-Computer Interaction     Open Access   (Followers: 6)
Algebras and Representation Theory     Hybrid Journal   (Followers: 1)
Algorithms     Open Access   (Followers: 11)
American Journal of Computational and Applied Mathematics     Open Access   (Followers: 5)
American Journal of Computational Mathematics     Open Access   (Followers: 4)
American Journal of Information Systems     Open Access   (Followers: 5)
American Journal of Sensor Technology     Open Access   (Followers: 4)
Anais da Academia Brasileira de Ciências     Open Access   (Followers: 2)
Analog Integrated Circuits and Signal Processing     Hybrid Journal   (Followers: 7)
Analysis in Theory and Applications     Hybrid Journal   (Followers: 1)
Animation Practice, Process & Production     Hybrid Journal   (Followers: 5)
Annals of Combinatorics     Hybrid Journal   (Followers: 3)
Annals of Data Science     Hybrid Journal   (Followers: 11)
Annals of Mathematics and Artificial Intelligence     Hybrid Journal   (Followers: 12)
Annals of Pure and Applied Logic     Open Access   (Followers: 2)
Annals of Software Engineering     Hybrid Journal   (Followers: 13)
Annual Reviews in Control     Hybrid Journal   (Followers: 6)
Anuario Americanista Europeo     Open Access  
Applicable Algebra in Engineering, Communication and Computing     Hybrid Journal   (Followers: 2)
Applied and Computational Harmonic Analysis     Full-text available via subscription   (Followers: 1)
Applied Artificial Intelligence: An International Journal     Hybrid Journal   (Followers: 13)
Applied Categorical Structures     Hybrid Journal   (Followers: 2)
Applied Clinical Informatics     Hybrid Journal   (Followers: 2)
Applied Computational Intelligence and Soft Computing     Open Access   (Followers: 12)
Applied Computer Systems     Open Access   (Followers: 2)
Applied Informatics     Open Access  
Applied Mathematics and Computation     Hybrid Journal   (Followers: 33)
Applied Medical Informatics     Open Access   (Followers: 10)
Applied Numerical Mathematics     Hybrid Journal   (Followers: 5)
Applied Soft Computing     Hybrid Journal   (Followers: 15)
Applied Spatial Analysis and Policy     Hybrid Journal   (Followers: 5)
Architectural Theory Review     Hybrid Journal   (Followers: 3)
Archive of Applied Mechanics     Hybrid Journal   (Followers: 5)
Archive of Numerical Software     Open Access  
Archives and Museum Informatics     Hybrid Journal   (Followers: 137)
Archives of Computational Methods in Engineering     Hybrid Journal   (Followers: 4)
Artifact     Hybrid Journal   (Followers: 2)
Artificial Life     Hybrid Journal   (Followers: 7)
Asia Pacific Journal on Computational Engineering     Open Access  
Asia-Pacific Journal of Information Technology and Multimedia     Open Access   (Followers: 1)
Asian Journal of Computer Science and Information Technology     Open Access  
Asian Journal of Control     Hybrid Journal  
Assembly Automation     Hybrid Journal   (Followers: 2)
at - Automatisierungstechnik     Hybrid Journal   (Followers: 1)
Australian Educational Computing     Open Access   (Followers: 1)
Automatic Control and Computer Sciences     Hybrid Journal   (Followers: 4)
Automatic Documentation and Mathematical Linguistics     Hybrid Journal   (Followers: 5)
Automatica     Hybrid Journal   (Followers: 11)
Automation in Construction     Hybrid Journal   (Followers: 6)
Autonomous Mental Development, IEEE Transactions on     Hybrid Journal   (Followers: 9)
Basin Research     Hybrid Journal   (Followers: 5)
Behaviour & Information Technology     Hybrid Journal   (Followers: 52)
Biodiversity Information Science and Standards     Open Access  
Bioinformatics     Hybrid Journal   (Followers: 287)
Biomedical Engineering     Hybrid Journal   (Followers: 15)
Biomedical Engineering and Computational Biology     Open Access   (Followers: 14)
Biomedical Engineering, IEEE Reviews in     Full-text available via subscription   (Followers: 18)
Biomedical Engineering, IEEE Transactions on     Hybrid Journal   (Followers: 34)
Briefings in Bioinformatics     Hybrid Journal   (Followers: 47)
British Journal of Educational Technology     Hybrid Journal   (Followers: 139)
Broadcasting, IEEE Transactions on     Hybrid Journal   (Followers: 10)
c't Magazin für Computertechnik     Full-text available via subscription   (Followers: 2)
CALCOLO     Hybrid Journal  
Calphad     Hybrid Journal  
Canadian Journal of Electrical and Computer Engineering     Full-text available via subscription   (Followers: 14)
Capturing Intelligence     Full-text available via subscription  
Catalysis in Industry     Hybrid Journal   (Followers: 1)
CEAS Space Journal     Hybrid Journal   (Followers: 2)
Cell Communication and Signaling     Open Access   (Followers: 2)
Central European Journal of Computer Science     Hybrid Journal   (Followers: 5)
CERN IdeaSquare Journal of Experimental Innovation     Open Access   (Followers: 1)
Chaos, Solitons & Fractals     Hybrid Journal   (Followers: 3)
Chemometrics and Intelligent Laboratory Systems     Hybrid Journal   (Followers: 14)
ChemSusChem     Hybrid Journal   (Followers: 7)
China Communications     Full-text available via subscription   (Followers: 7)
Chinese Journal of Catalysis     Full-text available via subscription   (Followers: 2)
CIN Computers Informatics Nursing     Full-text available via subscription   (Followers: 11)
Circuits and Systems     Open Access   (Followers: 15)
Clean Air Journal     Full-text available via subscription   (Followers: 2)
CLEI Electronic Journal     Open Access  
Clin-Alert     Hybrid Journal   (Followers: 1)
Cluster Computing     Hybrid Journal   (Followers: 1)
Cognitive Computation     Hybrid Journal   (Followers: 4)
COMBINATORICA     Hybrid Journal  
Combustion Theory and Modelling     Hybrid Journal   (Followers: 14)
Communication Methods and Measures     Hybrid Journal   (Followers: 12)
Communication Theory     Hybrid Journal   (Followers: 20)
Communications Engineer     Hybrid Journal   (Followers: 1)
Communications in Algebra     Hybrid Journal   (Followers: 3)
Communications in Partial Differential Equations     Hybrid Journal   (Followers: 3)
Communications of the ACM     Full-text available via subscription   (Followers: 55)
Communications of the Association for Information Systems     Open Access   (Followers: 18)
COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering     Hybrid Journal   (Followers: 3)
Complex & Intelligent Systems     Open Access   (Followers: 1)
Complex Adaptive Systems Modeling     Open Access  
Complex Analysis and Operator Theory     Hybrid Journal   (Followers: 2)
Complexity     Hybrid Journal   (Followers: 6)
Complexus     Full-text available via subscription  
Composite Materials Series     Full-text available via subscription   (Followers: 9)
Computación y Sistemas     Open Access  
Computation     Open Access  
Computational and Applied Mathematics     Hybrid Journal   (Followers: 2)
Computational and Mathematical Methods in Medicine     Open Access   (Followers: 2)
Computational and Mathematical Organization Theory     Hybrid Journal   (Followers: 2)
Computational and Structural Biotechnology Journal     Open Access   (Followers: 2)
Computational and Theoretical Chemistry     Hybrid Journal   (Followers: 9)
Computational Astrophysics and Cosmology     Open Access   (Followers: 1)
Computational Biology and Chemistry     Hybrid Journal   (Followers: 11)
Computational Chemistry     Open Access   (Followers: 2)
Computational Cognitive Science     Open Access   (Followers: 2)
Computational Complexity     Hybrid Journal   (Followers: 4)
Computational Condensed Matter     Open Access  
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 9)
Computational Geosciences     Hybrid Journal   (Followers: 15)
Computational Linguistics     Open Access   (Followers: 22)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 4)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 7)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 30)
Computer     Full-text available via subscription   (Followers: 91)
Computer Aided Surgery     Hybrid Journal   (Followers: 5)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 8)
Computer Communications     Hybrid Journal   (Followers: 10)
Computer Engineering and Applications Journal     Open Access   (Followers: 5)
Computer Journal     Hybrid Journal   (Followers: 9)
Computer Methods in Applied Mechanics and Engineering     Hybrid Journal   (Followers: 22)
Computer Methods in Biomechanics and Biomedical Engineering     Hybrid Journal   (Followers: 12)
Computer Methods in the Geosciences     Full-text available via subscription   (Followers: 2)
Computer Music Journal     Hybrid Journal   (Followers: 18)
Computer Physics Communications     Hybrid Journal   (Followers: 6)
Computer Science - Research and Development     Hybrid Journal   (Followers: 8)
Computer Science and Engineering     Open Access   (Followers: 19)
Computer Science and Information Technology     Open Access   (Followers: 13)
Computer Science Education     Hybrid Journal   (Followers: 14)
Computer Science Journal     Open Access   (Followers: 22)


Applied Soft Computing
  [SJR: 1.763]   [H-I: 75]   [15 followers]
   Hybrid Journal (can contain Open Access articles)
   ISSN (Print): 1568-4946
   Published by Elsevier  [3123 journals]
  • Towards an integrated evolutionary strategy and artificial neural network
           computational tool for designing photonic coupler devices
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Adriano da Silva Ferreira, Carlos Henrique da Silva Santos, Marcos Sergio Gonçalves, Hugo Enrique Hernández Figueroa
      Photonics has been widely explored in computing and communications, mainly to rationalize the relationship between device size minimization and data processing/transmission maximization. Generally driven by optimization and modeling techniques, the design of photonic devices is often performed by bio-inspired algorithms integrated with electromagnetic solvers, an approach that has achieved advances but is still time-consuming. As an alternative to a costly finite element method (FEM) solver, a multilayer perceptron (MLP) neural network is proposed for computing the power coupling efficiency of photonic couplers originally designed through an integrated evolutionary strategy (ES) and FEM routine. We address the ES-FEM design of two efficient couplers, present the MLP implementation, train and test the MLP on the routine-generated datasets, and measure MLP and FEM runtime. In tests, the MLP suitably predicted the power coupling efficiency of a variety of unseen couplers. The measured runtime showed the MLP is ∼10^5 times faster than the FEM. In conclusion, the MLP is a potential tool to be integrated with the ES in the design of such photonic couplers.

      PubDate: 2018-02-05T06:14:53Z
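The surrogate idea in this abstract can be sketched in a few lines: train a small regressor on solver-generated samples, then query it in place of the solver. This is a generic illustration, not the authors' network or data; `fem_solver` below is an invented cheap stand-in for the real finite element evaluation.

```python
import numpy as np

# Hypothetical stand-in for the costly FEM solver: maps 2 coupler
# geometry parameters to a scalar "coupling efficiency".
def fem_solver(x):
    return 0.5 + 0.4 * np.sin(x[:, 0]) * np.cos(x[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))   # design samples from the ES-FEM runs
y = fem_solver(X)                       # "expensive" ground-truth labels

# Minimal one-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (h @ W2 + b2).ravel()        # surrogate output
    err = pred - y
    # Backpropagation for the mean squared error loss.
    g2 = h.T @ err[:, None] / len(X)
    gh = (err[:, None] @ W2.T) * (1 - h ** 2)
    g1 = X.T @ gh / len(X)
    W2 -= lr * g2; b2 -= lr * err.mean()
    W1 -= lr * g1; b1 -= lr * gh.mean(axis=0)

mse = np.mean((pred - y) ** 2)          # fit quality on the training set
```

Once trained, the cheap `pred` replaces `fem_solver` calls inside the evolutionary loop, which is where the reported ∼10^5 speedup comes from.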
  • A GPU-accelerated parallel Jaya algorithm for efficiently estimating
           Li-ion battery model parameters
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Long Wang, Zijun Zhang, Chao Huang, Kwok Leung Tsui
      A parallel Jaya algorithm implemented on the graphics processing unit (GPU-Jaya) is proposed in this paper to estimate the parameters of a Li-ion battery model. Like the generic Jaya algorithm (G-Jaya), the GPU-Jaya requires no tuning of algorithm-specific parameters. Compared with the G-Jaya, the three main procedures of the GPU-Jaya (the solution update, the fitness value computation, and the best/worst solution selection) are all computed in parallel on the GPU via the compute unified device architecture (CUDA). Two types of CUDA memory, global memory and shared memory, are utilized in the execution. The effectiveness of the proposed GPU-Jaya algorithm in estimating the model parameters of two Li-ion batteries is validated via real experiments, while its high efficiency is demonstrated by comparison with the G-Jaya and other benchmarking algorithms. The experimental results show that the GPU-Jaya algorithm can accurately estimate battery model parameters while tremendously reducing the execution time on both entry-level and professional GPUs.

      PubDate: 2018-02-05T06:14:53Z
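For reference, the parameter-free update that defines the generic (G-)Jaya algorithm mentioned above is easy to state: each candidate moves toward the current best solution and away from the current worst, with greedy acceptance. The GPU kernels and the battery model are out of scope here, so a toy sphere objective stands in for the parameter-estimation error.

```python
import numpy as np

def sphere(x):                               # illustrative objective standing
    return float(np.sum(x ** 2))             # in for the battery-model error

def jaya_step(pop, fitness, rng):
    """One generation of the generic Jaya update. No algorithm-specific
    parameters are required, only the population size."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    # Move toward the best and away from the worst (Rao's Jaya rule).
    trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    trial_fit = np.array([sphere(x) for x in trial])
    improved = trial_fit < fitness           # greedy acceptance
    pop[improved] = trial[improved]
    fitness[improved] = trial_fit[improved]
    return pop, fitness

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(30, 4))
fit = np.array([sphere(x) for x in pop])
for _ in range(200):
    pop, fit = jaya_step(pop, fit, rng)
```

On the GPU version, the solution update, the fitness evaluations, and the best/worst reduction in `jaya_step` are the three procedures parallelized as CUDA kernels.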
  • Surrogate modeling based on granular models and fuzzy aptitude functions
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Israel Cruz-Vega, Carlos Reyes Garcia, Hugo Jair Escalante, Jose de Jesus Rangel-Magdaleno, Juan Manuel Ramirez Cortes
      Genetic algorithms (GAs) are part of a family of nature-inspired heuristic optimization techniques that, unlike many optimization techniques, are derivative-free. The effectiveness of this sort of algorithm has been proven in several domains and applications. However, when using GAs in complex optimization problems, e.g., those arising in engineering, the high-dimensional space of solutions and the expense of fitness functions produce heavy computational loads. An alternative is to use estimation techniques, known as surrogates, that provide approximate but cheap evaluations of solutions, making optimization tractable even for complex problems. In this paper, we focus on the use of a fuzzy system as an agent to construct surrogate models in the form of granules to be used with genetic algorithms. The novelty of this work lies in the extraction of knowledge from the search process and in the representation of granule behavior with the aid of fuzzy aptitude functions. These functions control the behavior of granules, allowing the optimization technique considerable savings in terms of resources: first, by avoiding unnecessary evaluations of solutions that are far from the optimal one, using a granular global-surrogate model; at the same time, the methodology allows us to work with granular local-surrogate models when the process requires a more intensive search around a specific region of the search space. Experimental results on benchmark functions show the validity and usefulness of the proposed techniques.

      PubDate: 2018-02-05T06:14:53Z
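The resource saving described here, skipping true evaluations for candidates the surrogate already judges poor, can be illustrated with a deliberately simple inverse-distance surrogate. This is not the paper's granular fuzzy model; `expensive_f` and the 2x screening threshold are illustrative choices only.

```python
import numpy as np

def expensive_f(x):                 # invented stand-in for a costly fitness
    return float(np.sum((x - 0.3) ** 2))

def surrogate_predict(x, X_seen, y_seen):
    """Inverse-distance-weighted estimate from archived evaluations, a
    deliberately crude stand-in for a granular surrogate model."""
    d = np.linalg.norm(X_seen - x, axis=1)
    if d.min() < 1e-12:
        return float(y_seen[d.argmin()])
    w = 1.0 / d
    return float(w @ y_seen / w.sum())

rng = np.random.default_rng(2)
X_seen = rng.uniform(-1, 1, (20, 3))                 # initial archive
y_seen = np.array([expensive_f(x) for x in X_seen])

true_evals = 0
for _ in range(300):
    cand = rng.uniform(-1, 1, 3)
    # Screen with the surrogate; only promising candidates pay for a
    # true evaluation (here: predicted within 2x of the archive best).
    if surrogate_predict(cand, X_seen, y_seen) < 2.0 * y_seen.min():
        y = expensive_f(cand)
        true_evals += 1
        X_seen = np.vstack([X_seen, cand])
        y_seen = np.append(y_seen, y)
```

The archive-based screen is the "global surrogate" role; a local surrogate would refit the estimate from only the archive points inside the region (granule) currently under intensive search.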
  • A novel hybrid genetic algorithm with granular information for feature
           selection and optimization
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Hongbin Dong, Tao Li, Rui Ding, Jing Sun
      Feature selection has been a significant task for data mining and pattern recognition. It aims to choose the optimal feature subset with minimum redundancy and maximum discriminating ability. This paper analyzes feature selection from two aspects: data and algorithm. To deal with redundant and irrelevant features in both high-dimensional, low-sample data and low-dimensional, high-sample data, a feature selection algorithm model based on granular information is presented. Our research examines experimentally how the granularity level affects both the classification accuracy and the size of the feature subset. First, an improved binary genetic algorithm with feature granulation (IBGAFG) is used to select the significant features. Then, an improved neighborhood rough set with sample granulation (INRSG) is proposed under different granular radii, which further improves the quality of the feature subset. Finally, to find the optimal granular radius, granularity λ optimization based on a genetic algorithm (ROGA) is presented; the optimal granularity parameters are found adaptively according to classification-accuracy feedback. The performance of the proposed algorithms is tested on eleven publicly available data sets and compared with other supervised methods and evolutionary algorithms. Additionally, the ROGA algorithm is applied to an enterprise financial dataset, where it can select the features that affect financial status. Experimental results demonstrate that the approaches are efficient and can provide higher classification accuracy using granular information.

      PubDate: 2018-02-05T06:14:53Z
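A binary GA for feature selection, as in the IBGAFG stage, encodes each candidate subset as a 0/1 mask and scores it by classification quality minus a size penalty. The sketch below uses synthetic data and a trivial least-squares scorer; the granulation and neighborhood rough set stages of the paper are omitted, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 2 informative features out of 12; labels depend only on them.
X = rng.normal(size=(300, 12))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fitness(mask):
    """Score a binary feature mask: accuracy of a simple linear scorer on
    the selected columns, minus a small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    w = np.linalg.lstsq(Xs, y * 2.0 - 1.0, rcond=None)[0]
    acc = np.mean(((Xs @ w) > 0) == y)
    return float(acc - 0.01 * mask.sum())

pop = rng.integers(0, 2, size=(20, 12))          # random initial masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection
    children = parents.copy()
    flip = rng.random(children.shape) < 0.05     # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])         # elitist replacement

best = pop[np.argmax([fitness(m) for m in pop])]
```

The size penalty plays the role of the redundancy objective: two masks with the same accuracy are ranked by how few features they keep.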
  • Bernstein polynomials for adaptive evolutionary prediction of short-term
           time series
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Kristina Lukoseviciute, Rita Baubliene, Daniel Howard, Minvydas Ragulskis
      We introduce a short-term time series prediction model based on evolutionary algorithms and Bernstein polynomials, which adapts Bernstein-type algebraic skeletons to extrapolate and predict short time series. A mixed smoothing strategy is used to achieve the necessary balance between the roughness of the algebraic prediction and the smoothness of the moving average. Computational experiments with standardized real-world time series illustrate the accuracy of this approach to short-term prediction.

      PubDate: 2018-02-05T06:14:53Z
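The algebraic skeleton here is the Bernstein basis itself: fit coefficients on the observed window mapped to [0, 1], then evaluate slightly past t = 1 to extrapolate. A plain least-squares sketch (the evolutionary coefficient search and the mixed smoothing are omitted, and the series below is invented):

```python
import numpy as np
from math import comb

def bernstein_fit(series, degree):
    """Least-squares Bernstein coefficients for a series sampled on [0, 1]."""
    n = len(series)
    t = np.linspace(0.0, 1.0, n)
    # Design matrix of Bernstein basis polynomials B_{k,degree}(t).
    B = np.array([[comb(degree, k) * ti ** k * (1 - ti) ** (degree - k)
                   for k in range(degree + 1)] for ti in t])
    coef, *_ = np.linalg.lstsq(B, series, rcond=None)
    return coef

def bernstein_eval(coef, t):
    d = len(coef) - 1
    return sum(c * comb(d, k) * t ** k * (1 - t) ** (d - k)
               for k, c in enumerate(coef))

# Fit a short series and extrapolate one step past the observed window,
# i.e. evaluate at t = 1 + 1/(n - 1).
y = np.array([1.0, 1.3, 1.7, 2.0, 2.6, 3.1, 3.9, 4.8])
c = bernstein_fit(y, degree=3)
pred_next = bernstein_eval(c, 1.0 + 1.0 / (len(y) - 1))
```

Evaluating beyond [0, 1] is what turns the fitted skeleton into a predictor; the smoothing strategy in the paper is what keeps that extrapolation from chasing noise.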
  • Multi-objective simplified swarm optimization with weighting scheme for
           gene selection
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Chyh-Ming Lai
      Gene selection can be regarded as a multi-objective problem that involves both minimizing the size of a gene subset and maximizing the prediction performance. This work proposes a hybrid filter/wrapper method for gene selection based on multi-objective optimization. In this method, an emerging aggregate filter method is adopted to choose the most informative genes; in addition, a multi-objective simplified swarm optimization (MOSSO) is proposed and integrated with a support vector machine as a wrapper to seek an optimal gene subset from the selected genes. Unlike most current multi-objective methods for gene selection, the proposed MOSSO uses a weighting scheme to guide the search towards the interesting regions defined by the preference, meaning that not all Pareto optimal solutions are generated, but only those that gene selection prefers. The proposed method is validated on ten gene expression datasets, and the results are compared with those of existing works. Statistical analysis indicates that the proposed method is highly competitive and can be considered a promising alternative for dealing with gene selection problems.

      PubDate: 2018-02-05T06:14:53Z
  • Evolutionary optimization of convolutional neural networks for cancer
           miRNA biomarkers classification
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Alejandro Lopez-Rincon, Alberto Tonda, Mohamed Elati, Olivier Schwander, Benjamin Piwowarski, Patrick Gallinari
      Cancer diagnosis is currently undergoing a paradigm shift with the incorporation of molecular biomarkers into the routine diagnostic panel. This breakthrough directs researchers to examine the role of microRNA in cancer, since its deregulation is associated with almost all human tumors. Such differences frequently recur in tumor-specific microRNA signatures, which are helpful for diagnosing tissue of origin and tumor subtypes. Nonetheless, the resulting classification problem is far from trivial, as there are hundreds of microRNA types, and tumors are non-linearly correlated with the presence of several overexpressions. In this paper, we propose to apply an evolutionarily optimized convolutional neural network classifier to this complex task. The presented approach is compared against 21 state-of-the-art classifiers on a real-world dataset featuring 8129 patients, 29 different classes of tumors, and 1046 different biomarkers. As a result of the comparison, we also present a meta-analysis of the dataset, identifying the classes on which the collective performance of the considered classifiers is less effective, and thus possibly singling out types of tumors for which biomarker tests might be less reliable.

      PubDate: 2018-02-05T06:14:53Z
  • A maximum power point tracking method for PV system with improved
           gravitational search algorithm
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Ling-Ling Li, Guo-Qian Lin, Ming-Lang Tseng, Kimhua Tan, Ming K. Lim
      Photovoltaic (PV) systems have gradually become a research focus in the field of renewable energy power generation, and the output efficiency of PV systems is a major concern of researchers. The output of a PV system has obvious non-linear characteristics and is greatly affected by the external environment. To achieve maximum output power, a PV system must operate under the guidance of maximum power point tracking (MPPT) methods, whose tracking time and accuracy need to be improved. Therefore, this study contributes to increasing the output efficiency of PV systems by improving the tracking time and accuracy of existing MPPT methods. Specifically, an MPPT method with an improved gravitational search algorithm (IGSA-MPPT) is proposed. A dynamic weight is added to the change factor of the gravity constant, and factors related to memory and population information exchange are added to the particle velocity update formula. IGSA-MPPT not only reduces the tracking time but also improves the tracking accuracy and mitigates fluctuations of the reference voltage. Finally, simulation results are compared with those of MPPT methods based on particle swarm optimization (PSO-MPPT) and the gravitational search algorithm (GSA-MPPT). The average tracking time of IGSA-MPPT was reduced by 0.023 s and 0.0116 s, and the average increase rates of maximum power were improved by 1.7071% and 0.7001%, compared with PSO-MPPT and GSA-MPPT respectively. In simulations of the PV system under varying irradiance and temperature, the tracking speed and accuracy of IGSA-MPPT were higher than those of PSO-MPPT, GSA-MPPT, GWO-MPPT, ICO-MPPT, and FCGSA-MPPT. In summary, IGSA-MPPT outperforms the comparison algorithms in tracking time and accuracy, and it can improve the output efficiency of PV systems in practical applications.

      PubDate: 2018-02-05T06:14:53Z
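For context, the baseline the paper improves on is the plain gravitational search algorithm: agents attract one another with forces proportional to fitness-derived masses, under a gravity constant that decays over time. The sketch below is the standard GSA on a toy objective; the paper's dynamic weight and memory/population-exchange terms in the velocity update are intentionally omitted, and all constants are illustrative.

```python
import numpy as np

def gsa_minimize(f, dim, n_agents=20, iters=200, g0=5.0, alpha=10.0, seed=4):
    """Plain gravitational search algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_agents, dim))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:                          # track best-so-far
            best_f = float(fit.min())
            best_x = X[fit.argmin()].copy()
        g = g0 * np.exp(-alpha * t / iters)             # decaying constant
        m = (fit - fit.max()) / (fit.min() - fit.max() - 1e-12)
        M = m / (m.sum() + 1e-12)                       # normalized masses
        acc = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    d = np.linalg.norm(X[j] - X[i]) + 1e-12
                    acc[i] += rng.random() * g * M[j] * (X[j] - X[i]) / d
        V = rng.random(X.shape) * V + acc               # stochastic inertia
        X = X + V
    return best_x, best_f

best_x, best_f = gsa_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the MPPT setting, `f` would be replaced by the (negated) measured PV output power at a candidate reference voltage, and the IGSA modifications act on the `V` update line.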
  • A group decision making model based on triangular fuzzy additive
           reciprocal matrices with additive approximation-consistency
    • Abstract: Publication date: April 2018
      Source:Applied Soft Computing, Volume 65
      Author(s): Fang Liu, Zu-Lin Liu, Yu-Hao Wu
      A group decision making (GDM) model is proposed when the experts evaluate their opinions through triangular fuzzy numbers. First, it is pointed out that the preference relations with triangular fuzzy numbers are inconsistent in nature. In order to distinguish the typical consistency, the concept of additive approximation-consistency is proposed for triangular fuzzy additive reciprocal matrices. The properties of triangular fuzzy additive reciprocal matrices with additive approximation-consistency are studied in detail. Second, using (n − 1) restricted preference values, a triangular fuzzy additive reciprocal preference relation with additive approximation-consistency is constructed. Third, a novel compatibility degree among triangular fuzzy additive reciprocal preference relations is defined. It is further applied to introduce the compatibility-degree induced ordered weighted averaging (CD-IOWA) operator for generating a collective triangular fuzzy additive reciprocal matrix with additive approximation-consistency. Finally, a new algorithm for the group decision-making problem with triangular fuzzy additive reciprocal preference relations is presented. A numerical example is carried out to illustrate the proposed definitions and algorithm.

      PubDate: 2018-02-05T06:14:53Z
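Additive reciprocity for triangular fuzzy preference values, the basic property underlying the matrices studied here, says that if a_ij = (l, m, u) then a_ji = (1 − u, 1 − m, 1 − l). A small checker (the example matrix is invented; the paper's additive approximation-consistency is a stronger condition not implemented here):

```python
def additive_reciprocal(tfn):
    """Given a triangular fuzzy preference value a_ij = (l, m, u),
    return its additive reciprocal a_ji = (1 - u, 1 - m, 1 - l)."""
    l, m, u = tfn
    return (1 - u, 1 - m, 1 - l)

def is_additive_reciprocal(matrix, tol=1e-9):
    """Check a_ji == reciprocal(a_ij) for every pair of entries in a
    matrix of triangular fuzzy numbers."""
    n = len(matrix)
    for i in range(n):
        for j in range(n):
            rec = additive_reciprocal(matrix[i][j])
            if any(abs(a - b) > tol for a, b in zip(matrix[j][i], rec)):
                return False
    return True

# 3x3 example; the diagonal (0.5, 0.5, 0.5) is the fuzzy "indifference".
A = [
    [(0.5, 0.5, 0.5), (0.4, 0.5, 0.7), (0.6, 0.7, 0.9)],
    [(0.3, 0.5, 0.6), (0.5, 0.5, 0.5), (0.5, 0.6, 0.8)],
    [(0.1, 0.3, 0.4), (0.2, 0.4, 0.5), (0.5, 0.5, 0.5)],
]
```

The paper's point is that reciprocity alone does not make such matrices consistent, which is why the weaker additive approximation-consistency is introduced and then preserved by the CD-IOWA aggregation.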
  • On risk measures and capital allocation for distributions depending on
           parameters with interval or fuzzy uncertainty
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Raluca Vernic
      Since risks are regarded as emerging from uncertainty, they can be modeled using probabilistic, interval or fuzzy methods. The probabilistic literature on risk measures, though well developed, quantifies risks by single values, which can seem restrictive for risk managers who would like more insight into the phenomena, for example, an interval covering the single value. Therefore, in this paper, we study the VaR and TVaR risk measures for distributions with parameters of interval type, further extended to fuzzy numbers. In particular, we concentrate on the class of location and/or scale parameters, showing that in this case the resulting risk measures are also in the form of intervals or, respectively, fuzzy numbers. Moreover, we apply the results to the capital allocation problem and detail the procedure for the normal, Pareto and Farlie–Gumbel–Morgenstern distributions. The formulas are numerically illustrated on interval and fuzzy parameters for some classical distributions; in this sense, some applications on real data sets are discussed.

      PubDate: 2018-02-05T06:14:53Z
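For a location-scale family the interval result is easy to see concretely: with normal losses, VaR_alpha = mu + sigma * z_alpha is increasing in mu and (for alpha > 0.5, hence z_alpha > 0) increasing in sigma, so interval-valued parameters map endpoint-to-endpoint onto an interval-valued VaR. A sketch with invented numbers:

```python
from statistics import NormalDist

def normal_var_interval(mu_iv, sigma_iv, alpha=0.99):
    """VaR of a normal loss with interval-valued location and scale.
    Monotonicity in both parameters (for alpha > 0.5) means the
    interval endpoints come directly from the parameter endpoints."""
    z = NormalDist().inv_cdf(alpha)        # standard normal quantile
    lo = mu_iv[0] + sigma_iv[0] * z
    hi = mu_iv[1] + sigma_iv[1] * z
    return lo, hi

# Loss with mean somewhere in [100, 120] and st. dev. in [15, 20].
lo, hi = normal_var_interval((100.0, 120.0), (15.0, 20.0), alpha=0.99)
```

The fuzzy-number extension in the paper applies the same monotonicity argument level-set by level-set (each alpha-cut of a fuzzy parameter is an interval).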
  • Portfolio rebalancing with respect to market psychology in a fuzzy
           environment: A case study in Tehran Stock Exchange
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Arash Khayamim, Abolfazl Mirzazadeh, Bahman Naderi
      While a vast amount of literature shows that psychological factors are major pricing determinants, portfolio optimization models ignore the emotional aspects of financial markets. Accordingly, this paper presents a two-stage portfolio rebalancing method that integrates mean-variance theory with market psychology. At the first stage, the psychological state of market participants is translated into a set of criteria for evaluating stocks, and then, in a fuzzy environment, the process of ratiocination used by technical analysts is simulated to assess the status of these criteria and determine under- and overvaluation possibilities of stocks. At the second stage, a fuzzy programming approach uses the calculated possibilities to revise an existing portfolio, considering the investor profile, transaction costs, and the risk-free rate of return. An empirical study using data obtained from the Tehran Stock Exchange is employed to validate the designed method and compare it against several other investment strategies, including the buy-and-hold strategy and a conventional portfolio rebalancing model. The results show that the proposed fuzzy method responds appropriately to the psychological component of the market. In addition, for all investor profiles, the recommended strategy clearly outperforms the market and the remaining strategies.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Proposing a centralized algorithm to minimize message broadcasting energy
           in wireless sensor networks using directional antennas
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Mohsen FallahHoseini, Reza Rafeh
      Wireless sensor networks (WSNs) are utilized in many fields, such as environmental monitoring and military applications. The nodes of WSNs are not rechargeable, so energy conservation in these networks is important, and one of the key issues is optimizing the energy used in message broadcasting. Depending on the capabilities of the nodes and their antennas, broadcasting can be performed in two ways: with directional or with omni-directional antennas. Centralized algorithms exist for broadcasting messages in wireless networks with either kind of antenna, but the problem of minimizing energy in broadcasting and multicasting is NP-hard. In this paper, a centralized algorithm is proposed that uses directional antennas to improve both the energy consumption and the running time. Since evolutionary algorithms with omni-directional antennas outperform heuristic algorithms in terms of both time and average result, the proposed approach is based on particle swarm optimization (PSO) as the evolutionary algorithm. We have also evaluated most of the well-known evolutionary and metaheuristic algorithms, such as Simulated Annealing (SA), the genetic algorithm (GA), Teaching-Learning-Based Optimization (TLBO), Harmony Search (HS) and Ant Colony Optimization (ACO). The experimental results indicate that the proposed method is effective, especially in terms of energy conservation.
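      The PSO core itself fits in a few lines. The sketch below minimizes a toy quadratic stand-in for the broadcast-energy objective; the paper's actual objective, antenna model and solution encoding are not reproduced here.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization: each particle tracks its
    personal best P and is pulled toward the global best G."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]                  # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# toy stand-in for the broadcast-energy cost function
best, val = pso(lambda x: sum(v * v for v in x), dim=3)
```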

      PubDate: 2018-02-05T06:14:53Z
  • Efficiency of bio- and socio-inspired optimization algorithms for axial
           turbomachinery design
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Mohamed Abdessamed Ait Chikh, Idir Belaidi, Sofiane Khelladi, José Paris, Michael Deligant, Farid Bakir
      Turbomachinery design is a complex problem that requires a lot of experience. The procedure may be sped up by the development of new numerical tools and optimization techniques. The latter rely on the parameterization of the geometry, a model to assess the performance of a given geometry, and the definition of an objective function and constraints to compare solutions. In order to improve the performance of the reference machine, two formulations including the off-design behaviour have been developed. The first is the maximization of the total nominal efficiency; the second maximizes the operating area under the efficiency curve. In this paper five optimization methods are assessed for axial pump design: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Cuckoo Search (CS), Teaching Learning Based Optimization (TLBO) and Sequential Linear Programming (SLP); the first four are non-intrusive methods and the latter is intrusive. Given an identical design point and set of constraints, each method proposes an optimized geometry. Their computing times, the optimized geometries and their performances (flow rate, head (H), efficiency (η), net positive suction head (NPSH) and power) are compared. Although all methods converge to similar results and geometries in simple settings, this is no longer the case when the range and number of constraints increase. The discrepancies in geometries and the variety of results are presented and discussed. Computational fluid dynamics (CFD) is used to validate the performances of the reference and optimized machines under the two main formulations. The most suitable approach is compared with some existing approaches in the literature.

      PubDate: 2018-02-05T06:14:53Z
  • Tree-seed algorithm for solving optimal power flow problem in large-scale
           power systems incorporating validations and comparisons
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Attia A. El-Fergany, Hany M. Hasanien
      This paper presents a novel application of the tree-seed algorithm (TSA) for solving the optimal power flow (OPF) problem in large-scale electric power systems. The objective function is adopted to minimize, in turn, the total fuel cost, the real power loss of the system, and the total voltage deviation in the given power system networks. The generators' real power outputs, the generator voltages, the transformer tap settings, and the capacitive reactive compensating devices define the search space for the OPF problem. The TSA is used to find the optimal values of the discrete and continuous control variables. The proposed algorithm is applied to solve the OPF problem for standard power networks such as the IEEE 57-bus and 300-bus systems under different case studies. The numerical simulation results are extensively verified through complete performance measurements with the necessary subsequent discussion. The achieved results confirm the effectiveness, flexibility, and applicability of the proposed TSA-based OPF methodology in comparison to other recent competing heuristic-based algorithms in the literature.

      PubDate: 2018-02-05T06:14:53Z
  • Two-stage genetic algorithm for parallel machines scheduling problem:
           Cyclic steam stimulation of high viscosity oil reservoirs
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Leonid Sheremetov, Jorge Martínez-Muñoz, Manuel Chi-Chim
      In this paper, the problem of optimally assigning trailer-mounted steam generators for cyclic steam stimulation (CSS) of petroleum wells is formulated as a parallel uniform machines scheduling (PMS) problem with release dates. The total weighted tardiness is used as the objective of the optimization process. The distinctive features of the proposed PMS formulation include jobs with variable weights, variable machine setup times, constrained capacity of drilling pads (where machines are allocated), and a modified tardiness criterion. A two-stage scheduling algorithm combining heuristic and genetic algorithms is proposed for solving it. A chromosome representation and crossover and mutation operators that generate only feasible solutions, thus avoiding the use of any repair mechanism, are discussed. The performance of the algorithm is tested on a real-world data set from an oilfield asset located in the coastal swamps of the Gulf of Mexico. The experiments indicate that the proposed approach gives good results in optimizing operational costs and petroleum recovery. The algorithm is implemented as part of a software platform for the optimization of CSS and is currently in use by oilfield engineers.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • A two-step multi-objectivization method for improved evolutionary
           optimization of industrial problems
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Anna Syberfeldt, Joel Rogström
      Multi-objectivization means that helper objectives are added to an optimization problem with the purpose of altering the search space in a way that improves the progress of the optimization algorithm. In this paper, a new method for multi-objectivization is proposed that is based on a two-step process. In the first step, a helper objective that conflicts with the main objective is added, and in the second step a helper objective that is in harmony with, but subservient to, the main objective is added. In contrast to existing methods for multi-objectivization, the proposed method aims at obtaining improved results in real-world optimizations by focusing on three aspects: (a) adding as little extra complexity to the problem as possible, (b) achieving an optimal balance between exploration and exploitation in order to promote an efficient search, and (c) ensuring that the main objective, which is of main interest to the user, is always prioritized. Results from evaluating the proposed method on a complex real-world scheduling problem and a theoretical benchmark problem show that the method outperforms both a traditional single-objective approach and the prevailing method for multi-objectivization. Besides describing the proposed method, the paper also outlines interesting aspects of multi-objectivization to investigate in the future.
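      The mechanics of multi-objectivization rest on Pareto dominance over the main objective plus helper objectives. A minimal sketch, with purely hypothetical one-dimensional objectives (not those of the paper): adding a conflicting helper keeps a front of trade-off solutions alive instead of collapsing the search to a single optimum, while the main optimum itself stays non-dominated.

```python
def dominates(a, b):
    """Pareto dominance for minimization over (main, helper) tuples."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def objectives(x):
    main = (x - 2.0) ** 2          # hypothetical main objective, optimum at x = 2
    helper = (x + 1.0) ** 2        # hypothetical conflicting helper, optimum at x = -1
    return (main, helper)

pop = [-2.0, 0.0, 1.0, 2.0, 3.0]
# non-dominated front of the population under (main, helper)
front = [x for x in pop
         if not any(dominates(objectives(y), objectives(x)) for y in pop if y != x)]
```

Note that the main-objective optimum (x = 2) remains on the front, which is the property the paper's prioritization of the main objective relies on.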
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Partial order label decomposition approaches for melanoma diagnosis
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Javier Sánchez-Monedero, María Pérez-Ortiz, Aurora Sáez, Pedro Antonio Gutiérrez, César Hervás-Martínez
      Melanoma is a type of cancer that develops from the pigment-containing cells known as melanocytes. It usually occurs on the skin, and early detection and diagnosis are strongly related to survival rates. Melanoma recognition is a challenging task that is nowadays performed by well-trained dermatologists, who may produce varying diagnoses due to the complexity of the task. This motivates the development of automated diagnosis tools, in spite of the inherent difficulties (intra-class variation and visual similarity between melanoma and non-melanoma lesions, among others). In the present work, we propose a system combining image analysis and machine learning to detect melanoma presence and severity. Severity is assessed in terms of melanoma thickness, which is measured by the Breslow index. Previous works mainly focus on the binary problem of detecting the presence of melanoma; the system proposed in this paper goes a step further by also considering the stage of the lesion in the classification task. To do so, we extract 100 features that describe the shape, colour, pigment network and texture of the benign and malignant lesions. The problem is tackled as a five-class classification problem, where the first class represents benign lesions and the remaining four classes represent the different stages of melanoma (via the Breslow index). Based on the problem definition, we identify the learning setting as a partial order problem, in which the patterns belonging to the different melanoma stages present an order relationship, but there is no order arrangement with respect to the benign lesions. Under this assumption about the class topology, we design several proposals that exploit this structure and improve data preprocessing. We experimentally demonstrate that the proposals exploiting the partial order assumption achieve better performance than 12 baseline nominal and ordinal classifiers (including a deep learning model) that do not consider this partial order. To deal with class imbalance, we additionally propose specific over-sampling techniques that consider the structure of the problem when creating synthetic patterns. The experimental study is carried out with clinician-curated images from the Interactive Atlas of Dermoscopy, which eases reproducibility of the experiments. Concerning the results obtained, in spite of the classification problem having been made more complex with additional classes, the performance of our proposals on the binary problem is similar to that reported in the literature.

      PubDate: 2018-02-05T06:14:53Z
  • MS-SVM: Minimally Spanned Support Vector Machine
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Rupan Panja, Nikhil R. Pal
      For a Support Vector Machine (SVM), the time required to classify an unknown data point is proportional to the number of support vectors, so for some real-time applications the use of an SVM can be problematic if the number of support vectors is high. Depending on the complexity of the class structure, the number of support vectors of an SVM model sometimes increases with the number of training data points. Here our objective is to reduce the number of support vectors while maintaining more or less the same level of accuracy as a normal SVM that does not use any reduction of support vectors. An SVM finds a separating hyperplane maximizing the margin of separation, and hence the location of the hyperplane is primarily dependent on a set of "boundary points". Here, we first identify some boundary points using a minimum spanning tree (MST) on the training data to obtain a reduced training set. The SVM algorithm is then applied to the reduced training data to generate the classification model. We call this algorithm the Minimally Spanned Support Vector Machine (MS-SVM). We also assess the performance obtained by relaxing the definition of boundary points. Moreover, we extend the algorithm to a feature space using a kernel transformation; in this case, an MST is generated in the feature space using the associated kernel matrix. Our experimental results demonstrate that the proposed algorithm can considerably reduce the number of support vectors without affecting the overall classification accuracy, irrespective of whether the MST is generated in the input space or in the feature space. Thus the MS-SVM algorithm can be used instead of the SVM for efficient classification.
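      The boundary-identification step can be sketched as follows: build an MST over the training points and keep the endpoints of every MST edge whose two endpoints carry different class labels. This is only an illustrative reading of the idea (the full MS-SVM then trains an SVM on the reduced set, which is omitted here).

```python
import math

def mst_edges(points):
    """Prim's algorithm on Euclidean distances; returns MST edges as index pairs."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

def boundary_points(points, labels):
    """Indices of points incident to an MST edge linking different classes."""
    idx = set()
    for i, j in mst_edges(points):
        if labels[i] != labels[j]:
            idx.update((i, j))
    return sorted(idx)

# two small, well-separated toy classes
pts = [(0, 0), (0, 1), (1, 0), (4, 0), (4, 1), (5, 0)]
lab = [0, 0, 0, 1, 1, 1]
bnd = boundary_points(pts, lab)
```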

      PubDate: 2018-02-05T06:14:53Z
  • On the use of local search heuristics to improve GES-based Bayesian
           network learning
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Juan I. Alonso, Luis de la Ossa, José A. Gámez, José M. Puerta
      Bayesian network learning is computationally expensive even when the optimality of the result is sacrificed, and many methods aim at obtaining quality solutions in affordable time. Most of them are based on local search algorithms, as these allow candidate networks to be evaluated very efficiently, and they can be further improved by using local search-based metaheuristics to avoid getting stuck in local optima. This approach has been successfully applied to searching for network structures in the space of directed acyclic graphs. Other algorithms search for networks in the space of equivalence classes; the most important of these is GES (greedy equivalence search), which guarantees obtaining the optimal network under certain conditions. However, it can also get stuck in local optima when learning from datasets of limited size. This article proposes the use of local search-based metaheuristics as a way to improve the behaviour of GES in such circumstances. These methods also guarantee asymptotic optimality, and the experiments show that they improve upon the score of the networks obtained with GES.

      PubDate: 2018-02-05T06:14:53Z
  • Automatic regression methods for formulation of elastic modulus of
           recycled aggregate concrete
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Emadaldin Mohammadi Golafshani, Ali Behnood
      The use of recycled concrete aggregate to produce new concrete can contribute to sustainability in the construction industry. However, the mechanical properties of this type of aggregate should be precisely investigated before it is used in different applications. The elastic modulus of concrete is one of the most important design parameters in many construction applications. Because of the variety of mix designs, the existing formulas for the elastic modulus of concrete cannot be used for recycled aggregate concrete (RAC). In recent years, there have been a few attempts to predict the elastic modulus of RAC, especially with various types of artificial intelligence (AI) methods. In this paper, three automatic regression methods, namely genetic programming (GP), artificial bee colony programming (ABCP) and biogeography-based programming (BBP), were used to estimate the elastic modulus of RAC. The performances of the different automatic regression models were compared with each other. Moreover, a sensitivity analysis was performed to assess the trend of the elastic modulus as a function of the effective input parameters used for developing the different automatic regression models. Overall, the results show that GP, ABCP, and BBP can be used as reliable algorithms for predicting the elastic modulus of RAC. In addition, the water absorption of the mixed coarse aggregate and the ratio of fine aggregate to total aggregate were found to be two of the most influential parameters affecting the elastic modulus of RAC.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Cutset-type possibilistic c-means clustering algorithm
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Haiyan Yu, Jiulun Fan
      The possibilistic c-means (PCM) clustering algorithm was introduced to avoid the sensitivity of the fuzzy c-means (FCM) clustering algorithm to outliers by relaxing the column sum constraint on the membership matrix of the FCM, i.e., the between-class relationships. The membership value in the PCM is then interpreted as the typicality value of a data point, which reflects well the absolute distance of the data point to one cluster. However, because it relaxes the relationships between clusters, the PCM suffers from a significant defect: the coincident clusters problem. In this paper, a novel cutset-type possibilistic clustering (C-PCM) algorithm is proposed. The C-PCM first generates a cluster core resulting from a β-cutset for each cluster and searches for data points located inside the cluster core. Then, the typicalities of these points with respect to the other clusters are modified, thus reintroducing the between-class relationships and avoiding coincident clusters. An adaptive method is also given for determining the parameter β in the C-PCM. Moreover, a novel segmentation method for images corrupted by salt-and-pepper noise is proposed by taking advantage of the strong robustness of the C-PCM to outliers. Several experiments are carried out on synthetic data sets, high-dimensional data sets and noisy images, which demonstrate the good performance of the proposed algorithm.
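      A minimal sketch of the idea, using the standard PCM typicality formula; the β-cut "core" test and the suppression rule applied to the second cluster below are illustrative simplifications, not the paper's exact modification.

```python
def pcm_typicality(d2, eta, m=2.0):
    """Standard PCM typicality of a point to a cluster, given squared
    distance d2 and cluster bandwidth eta:  1 / (1 + (d2/eta)^(1/(m-1)))."""
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))

beta = 0.5
t1 = pcm_typicality(0.1, eta=1.0)   # point close to cluster 1
t2 = pcm_typicality(4.0, eta=1.0)   # same point, far from cluster 2
# if the point lies inside cluster 1's beta-cut core, suppress its
# typicality to the other cluster (illustrative stand-in for C-PCM's rule)
if t1 >= beta:
    t2 = min(t2, 1.0 - t1)
```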
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Test-cost-sensitive rough set based approach for minimum weight vertex
           cover problem
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Xiaojun Xie, Xiaolin Qin, Chunqiang Yu, Xingye Xu
      The minimum vertex cover problem (MVCP) and the minimum weighted vertex cover problem (MWVCP) arise in a variety of applications. This paper approaches the MWVCP from a test-cost-sensitive rough set perspective. We first provide a method to convert a minimum weight vertex cover of a graph into a minimal test cost attribute reduct of a test-cost-sensitive decision table. Then, a test-cost-sensitive decision table induced from an undirected weighted graph is established. On the foundation of the induced decision table, an improved heuristic algorithm for finding minimum weight vertex covers is proposed, which avoids a large amount of redundant computation. Furthermore, to improve efficiency, a quantum-behaved particle swarm optimization with an immune mechanism is presented, which can avoid premature convergence, improve the global searching ability, and increase the convergence speed. The results of the experiments show the advantages and limitations of the proposed algorithms compared with state-of-the-art algorithms.
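      For readers unfamiliar with the MWVCP itself, the classic greedy heuristic below gives a baseline feel for the problem: repeatedly pick the vertex with the best weight-to-covered-edges ratio. This is the textbook heuristic, not the rough-set reduct algorithm or the QPSO of the paper.

```python
def greedy_mwvc(edges, weights):
    """Greedy heuristic for minimum weighted vertex cover: while edges
    remain uncovered, add the vertex minimizing weight / uncovered-degree."""
    uncovered = set(edges)
    cover = set()
    while uncovered:
        deg = {}                      # degree w.r.t. still-uncovered edges
        for u, v in uncovered:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        v = min(deg, key=lambda x: weights[x] / deg[x])
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

# small weighted graph: a triangle 0-1-2 plus a pendant edge 2-3
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
w = {0: 4, 1: 3, 2: 2, 3: 10}
cover = greedy_mwvc(edges, w)
```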

      PubDate: 2018-02-05T06:14:53Z
  • Business process outsourcing enhanced by fuzzy linguistic consensus model
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Maria Vincenza Ciasullo, Giuseppe Fenza, Vincenzo Loia, Francesco Orciuoli, Orlando Troisi, Enrique Herrera-Viedma
      Business process outsourcing represents a strategic option for obtaining an overall improvement of performance in the business process management context. It consists in externalizing whole sub-processes (e.g., production, logistics, human resources) of a value chain. Over the last decade, the concept of the value chain has moved toward the more flexible concept of the value net, which implies the assembly of several value chains tailored to specific objectives, markets, etc. Thus, the composition of a value chain within a value net environment can be understood as the modeling of a macro business process in which sub-processes can be outsourced. Such composition activity involves crucial decision-making moments that need to be sustained by a group of decision-makers with several heterogeneous competences in order to select the most suitable external providers to which to delegate specific sub-processes. This work proposes a framework to enhance business process outsourcing by introducing group decision-making support that relies on a fuzzy linguistic consensus model. In addition, the framework implements algorithms to learn and assign different weights to decision-makers, considering the context and time at which they participate in the group decision making. The framework is applied to an Italian footwear company and illustrated with a numerical example.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Ensemble of evolving data clouds and fuzzy models for weather time series
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Eduardo Soares, Pyramo Costa, Bruno Costa, Daniel Leite
      This paper describes a variation of the data cloud-based intelligent method known as the typicality-and-eccentricity-based method for data analysis (TEDA). The objective is to develop data-centric, nonlinear and time-varying models to predict mean monthly temperature. TEDA is an incremental algorithm that considers the density of the data and the scattering of clouds over the data space. The method requires neither a priori knowledge of the dataset nor user-defined parameters; however, if some knowledge about the number of clouds and rules is available, it can be expressed through a single parameter. Past values of the minimum, maximum and mean monthly temperature, as well as previous values of exogenous variables such as cloudiness, rainfall and humidity, are considered in the analysis. A non-parametric method based on Spearman correlation is proposed to rank and select the most relevant features and time delays for a more accurate prediction. The datasets were obtained from weather stations located in major Brazilian cities such as Sao Paulo, Manaus, Porto Alegre, and Natal, which are known to have particular weather characteristics. TEDA results are compared with results provided by the evolving Takagi–Sugeno (eTS) and the extended Takagi–Sugeno (xTS) methods. Additionally, an ensemble of cloud and fuzzy models with fuzzy aggregation operators is developed to give single-valued and granular predictions of the time series. Granular predictions convey a range of possible temperature values and give an idea of the error and uncertainty associated with the data.
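      A sketch of the recursive mean/variance/eccentricity updates commonly used in TEDA-style methods, on a scalar stream (this follows the usual recursive density estimation formulas, not necessarily the paper's exact variant): the eccentricity of a new sample is 1/k plus its normalized squared distance to the running mean, so anomalous samples stand out.

```python
def teda_eccentricities(xs):
    """Incremental TEDA-style updates for a scalar stream; returns the
    eccentricity of each sample at the moment it arrives."""
    mu, var, out = 0.0, 0.0, []
    for k, x in enumerate(xs, start=1):
        mu = (k - 1) / k * mu + x / k
        if k > 1:
            var = (k - 1) / k * var + (x - mu) ** 2 / (k - 1)
        # eccentricity: 1/k + (mu - x)^2 / (k * var), degenerate case var == 0
        ecc = 1.0 / k if var == 0 else 1.0 / k + (mu - x) ** 2 / (k * var)
        out.append(ecc)
    return out

# hypothetical monthly temperatures; the last value is anomalous
ecc = teda_eccentricities([20.0, 21.0, 20.5, 20.8, 35.0])
```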

      PubDate: 2018-02-05T06:14:53Z
  • Tool condition prognostics using logistic regression with penalization and
           manifold regularization
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Jianbo Yu
      Appropriate and timely maintenance decisions for tool health degradation (i.e., wear) are essential to prevent severe degradation in product processing quality. Multiple sensor signals (e.g., vibration, acoustic emission) collected from tools contain much valuable information about their health states; however, fusing multiple sensor signals for assessing and predicting tool health presents a major challenge. In this paper, the logistic probability (LP) generated by logistic regression with manifold regularization (LRMR) serves as a comprehensible indicator of the tool health state. Prognostic features are first selected by logistic regression with penalization regularization (LRPR) to improve the performance of the proposed tool health prognostics system. Based on the health indicator values (i.e., LPs) and tool ages, the LR model is further developed to construct, online, the relationship between the tool health state and tool age, and then to predict the remaining useful life (RUL) of tools subjected to condition monitoring. The proposed prognostics system provides an adaptive learning scheme for the assessment and prediction of tool health, and hence is easy to use in real-world applications. Experimental results on a tool life test-bed illustrate the potential of the proposed system for tool health prognostics.
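      The "logistic probability as health indicator" idea can be sketched with plain logistic regression on a single feature; the data below are hypothetical and the manifold/penalization regularizers of the paper are omitted. The fitted probability maps tool age to a 0–1 wear indicator.

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Plain gradient-descent logistic regression on one feature."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n
            gb += (p - y) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# hypothetical data: normalized tool age vs. worn (1) / healthy (0)
ages = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
worn = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(ages, worn)
lp = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b)))  # health indicator
```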
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • A hybrid swarm algorithm based on ABC and AIS for 2L-HFCVRP
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Defu Zhang, Ruibing Dong, Yain-Whar Si, Furong Ye, Qisen Cai
      This paper addresses the heterogeneous fleet capacitated vehicle routing problem with two-dimensional loading constraints (2L-HFCVRP). The 2L-HFCVRP is a combination of two NP-hard problems and has a wide range of applications in the transportation and logistics fields. In this paper, we propose a hybrid swarm algorithm that combines the Artificial Bee Colony (ABC) algorithm and the Artificial Immune System (AIS) algorithm to solve the 2L-HFCVRP. The proposed algorithm is allowed to search infeasible solutions, and several efficient strategies are developed to escape from local optima. Extensive computational results on several well-known benchmark data sets verify the effectiveness of the proposed algorithm, which is shown to outperform the best algorithms in the literature for 2L-HFCVRP instances.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • On segmentation of images having multi-regions using Gaussian type radial
           basis kernel in fuzzy sets framework
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Noor Badshah, Ali Ahmad
      Segmentation of images containing multiple objects with intensity inhomogeneity and noise is always challenging. In this paper, we propose a new model for the segmentation of images containing multiple objects with varying intensity. In the proposed model we develop a novel kernel metric based on generalized averages, and to ensure its applicability to noisy images we use a Gaussian-type radial basis kernel. To speed up convergence and to reach the global optimum of the proposed model, we express the energy functional of our model in a fuzzy pseudo level set formulation. The proposed model works well on images containing multiple objects with intensity inhomogeneity and noise, and also on images whose background has maximum, minimum or average intensity. Instead of a length term, we use Gaussian smoothing for the regularization of the pseudo level set (the fuzzy membership function). Experimental results show better performance of the proposed model over existing state-of-the-art models, both qualitatively and quantitatively (Jaccard similarity).
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • A multi-class learning method for multiconlitron using hybrid binary tree
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Qiangkui Leng, Yuping Qin, Yujian Li
      The multiconlitron is a general geometric method for constructing piecewise linear classifiers, but it was initially designed only for two-class problems. In this paper, we propose a multi-class learning method for the multiconlitron that uses a hybrid binary tree architecture. At each internal node that does not generate leaf nodes, a hyperplane is first created as the perpendicular bisector of the line segment linking the centroids of the two classes farthest from each other. Then, according to which side of the hyperplane they fall on, all the inherited classes are divided into two groups for the next iteration. At an internal node that will generate leaf nodes, the multiconlitron is constructed by the support multiconlitron algorithm, which can separate one class from the other class (or group). In general, the approximate hyperplane given by the centroids provides a fast division in the early stages of the training phase, whereas the ensemble boundaries built with the multiconlitron perform the final precise decision. The result is a hybrid binary partition tree representing a hierarchical division of the given classes. Experimental results show that the proposed method is better than the one-versus-one multiconlitron and the directed acyclic graph multiconlitron, both in terms of classification effectiveness and computational time. Moreover, comparison with another tree-based multi-class piecewise linear classifier verifies its competitiveness and superiority.
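      One internal-node step of the tree can be sketched as follows: find the two farthest class centroids and assign every class to the nearer side of their perpendicular bisector (here implemented, equivalently, by comparing centroid distances). The toy classes are hypothetical, and the leaf-node support-multiconlitron step is omitted.

```python
import math

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def split_classes(class_points):
    """One internal-node step: pick the two farthest class centroids and
    group every class with the nearer of the two (perpendicular-bisector
    side test, done via centroid distances)."""
    cents = {c: centroid(p) for c, p in class_points.items()}
    left, right = max(
        ((a, b) for a in cents for b in cents if a < b),
        key=lambda ab: math.dist(cents[ab[0]], cents[ab[1]]))
    groups = {left: [], right: []}
    for c, ct in cents.items():
        near = left if math.dist(ct, cents[left]) <= math.dist(ct, cents[right]) else right
        groups[near].append(c)
    return groups

# three hypothetical classes; A and B are close, C is far away
data = {
    "A": [(0, 0), (0, 1)],
    "B": [(1, 0), (1, 1)],
    "C": [(9, 9), (9, 10)],
}
groups = split_classes(data)
```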

      PubDate: 2018-02-05T06:14:53Z
  • A neural approach under transfer learning for domain adaptation in
           land-cover classification using two-level cluster mapping
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Shounak Chakraborty, Moumita Roy
      In this article, a domain adaptation (DA) technique using artificial neural network-based classifiers is proposed, built on a two-level cluster mapping technique that integrates the common data transformation and transfer learning approaches in a single framework. Here, after applying a self-organizing feature map based clustering technique, a semi-automatic threshold selection mechanism is used to separate the most confidently paired source–target clusters (i.e., the most similar ones) from the alien target clusters (i.e., the non-similar ones). This strategy makes the proposed technique eligible for applying the transfer learning mechanism. Thereafter, the samples from the most confidently paired target clusters are transformed in terms of the corresponding source clusters using an auto-encoder. Labelled samples for the paired target clusters are collected from the corresponding source clusters, whereas the transfer learning technique is used to select labelled samples from the alien target clusters. To assess the effectiveness of the proposed DA approach, experiments are conducted on three source–target datasets and the results are compared with other state-of-the-art techniques. The results are found to be encouraging for the proposed technique.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • An improved hybrid ant particle optimization (IHAPO) algorithm for
           reducing travel time in VANETs
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Vinita Jindal, Punam Bedi
      With traffic volume increasing day by day, cities in most developing countries are facing the problem of extreme congestion. Congestion on the road leads to increases in travel time and travel cost, and has a significant impact on people's health. This paper proposes a novel Improved Hybrid Ant Particle Optimization (IHAPO) algorithm for reducing travel time to enable smart transportation. The aim of the proposed algorithm is to select the best path during peak hours by avoiding the optimal path when it is congested and resuming it when congestion eases. The algorithm is an improvement of the existing Modified Ant Colony Optimization (MACO) algorithm: it combines the MACO and Particle Swarm Optimization (PSO) algorithms using a global best exchange method. Initially, both algorithms work separately and produce their best solutions; the two solutions are then compared and a new global best solution is determined for the whole network. According to the best solution obtained, the positions of both ants and particles are updated for the next iterations. The MACO algorithm works under the assumption that all roads are in working condition, whereas the proposed IHAPO algorithm works under normal road conditions. Another difference between the algorithms is the pheromone update process, which makes the new algorithm more effective. The proposed algorithm is tested on a map of North-West Delhi, India, using Simulation of Urban MObility (SUMO) for traffic simulation. It was found that travel time is reduced significantly by using the proposed IHAPO algorithm compared to the existing algorithms under consideration.
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Scheduling a realistic hybrid flow shop with stage skipping and adjustable
           processing time in steel plants
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Jianyu Long, Zhong Zheng, Xiaoqiang Gao, Panos M. Pardalos
      This paper studies a new realistic hybrid flow shop (HFS) scheduling problem with stage skipping and adjustable processing time in the steelmaking-continuous casting (SCC) production process. The SCC scheduling problem is solved to determine the machine allocations, starting times and ending times for all operations of all charges (jobs). By clarifying the production objectives and the constraints related to stage skipping and adjustable processing time, a new SCC scheduling model is built. We develop an improved genetic algorithm (GA) to address the scheduling problem. For calibrating our GA, four encoding methods, two selection operators and three crossover operators, which are effective and widely used in regular HFS scheduling problems, are compared and analyzed. In addition, a quality improvement approach is embedded into the GA to further optimize each solution obtained by the decoding heuristic. Moreover, to accelerate the local optimization and avoid premature convergence, a new elitist strategy and a restart strategy are employed in our GA. Computational experiments based on instances generated from a practical production process show that the proposed improved GA is effective for solving the SCC scheduling problem.

      PubDate: 2018-02-05T06:14:53Z
  • Bit-wise Pseudo-Bayes genetic algorithms to model data distributions
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Anton Aguilar-Rivera
      This work introduces a method to generate implicit models from data. The algorithm is based on Bayesian networks, discrete codification and genetic algorithms. The concept of bit-wise Pseudo-Bayes networks, introduced in this work, refers to Bayesian networks generated from discretized data. The network describes the model in terms of the correlations between bits. The model is considered implicit in the sense that the meaning of the original variables is lost during discretization, but it can provide random samples with a distribution similar to that of the original data. These samples can be fed transparently into other algorithms that rely on data samples. Moreover, this approach alleviates the problem of storing and handling large volumes of data, a common occurrence in modern data science, and circumvents the problems of the identification process. The algorithm to generate bit-wise Pseudo-Bayes models is described in detail, and innovations to the representation of Bayesian networks based on extended chain structures are introduced. New discretization methods are also introduced and compared to others reported in the literature, which have mainly been used in continuous evolutionary optimization. The performance of the proposed algorithm is studied using two data sets: price data from the stocks of the Dow Jones Industrial Average, and price data from the stocks of the Mexican Índice de Precios y Cotizaciones. The proposed method is compared against other discrete modeling techniques reported in the literature, attaining higher performance. The results indicate that direct discretization methods were effective when coupled with Bayesian networks, but further research is needed to guarantee scalability. The study indicated that the Dow Jones data set was more difficult than the Mexican index data set, which was attributed to the 2008 American financial crisis.
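A minimal sketch of the bit-wise idea, assuming an equal-width direct discretization (one of several schemes the paper compares; the function names are hypothetical): continuous values become fixed-width bit strings, and correlations between bit columns are the raw material for the network structure.

```python
import math

def discretize(value, lo, hi, bits=4):
    """Map a continuous value in [lo, hi] to one of 2**bits equal-width
    intervals and return that interval index as a bit list."""
    levels = 2 ** bits
    idx = min(int((value - lo) / (hi - lo) * levels), levels - 1)
    return [int(b) for b in format(idx, f"0{bits}b")]

def bit_correlation(col_a, col_b):
    """Empirical Pearson correlation between two 0/1 bit columns."""
    n = len(col_a)
    ma, mb = sum(col_a) / n, sum(col_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(col_a, col_b)) / n
    sa = math.sqrt(sum((a - ma) ** 2 for a in col_a) / n)
    sb = math.sqrt(sum((b - mb) ** 2 for b in col_b) / n)
    return cov / (sa * sb) if sa and sb else 0.0
```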
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • A comparative study of improved GA and PSO in solving multiple traveling
           salesmen problem
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Honglu Zhou, Mingli Song, Witold Pedrycz
      The multiple traveling salesman problem (MTSP) is a generalization of the classic traveling salesman problem (TSP). Compared to TSP, MTSP is more common in real-life applications. In this paper, in order to solve the minsum MTSP with multiple depots, closed paths, and a required minimum number of cities each salesman should visit, we propose two partheno genetic algorithms (PGA). One is a PGA with roulette selection and elitist selection, in which four new mutation operations are proposed. The other, named IPGA, binds selection and mutation together: a new selection operator and a more comprehensive mutation operator are used. The new mutation operator is based on the four mutation operations in PGA and eliminates the mutation probability. For comparative analysis, we also adopt the particle swarm optimization (PSO) algorithm and one state-of-the-art method (the invasive weed optimization algorithm from the literature) to solve MTSP. The algorithms are validated with publicly available TSPLIB benchmarks. The performance is discussed and evaluated through a series of comparative experiments. IPGA is demonstrated to be superior in solving MTSP.
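A partheno genetic algorithm evolves permutations by mutation alone, with no crossover. Two classic permutation mutations of the kind such a PGA relies on can be sketched as follows (illustrative operators, not the paper's four specific ones):

```python
import random

def swap_mutation(tour, rng=random):
    """Exchange two randomly chosen cities in the tour."""
    i, j = rng.sample(range(len(tour)), 2)
    child = list(tour)
    child[i], child[j] = child[j], child[i]
    return child

def inversion_mutation(tour, rng=random):
    """Reverse a randomly chosen sub-segment of the tour."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```

Both operators always return a valid permutation of the input, which is why a PGA can dispense with repair steps that crossover-based encodings often need.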

      PubDate: 2018-02-05T06:14:53Z
  • Five discrete symbiotic organisms search algorithms for simultaneous
           optimization of feature subset and neighborhood size of KNN classification
    • Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): T.W. Liao, R.J. Kuo
      This paper develops five new discrete Symbiotic Organisms Search (SOS) algorithms for the simultaneous optimization of the feature subset and neighborhood size of the k-nearest neighbor model to improve classification accuracy. The first algorithm is a discrete version of the original SOS algorithm, named DSOS. The second is a hybrid derived from enhancing DSOS with paired-swap local search, named DHSOS. The third is a cooperative hybrid between DSOS and a discrete particle swarm optimization (DPSO), named DSOSPSO. The fourth and fifth are modified from the second by adapting the population size rather than fixing it, named APDHSOS and AP2DHSOS, respectively. Five existing metaheuristic algorithms are also implemented and extended for comparison. The performance of these algorithms employing the k-nearest neighbor classification model is evaluated in terms of classification error and computational time based on stratified k-fold cross validation with 11 datasets. The classification errors on five larger data sets are also obtained to further verify the test results. Based on the test results, it is found that: (1) feature selection with a fixed neighborhood size yields lower errors than an optimized neighborhood size without feature selection; (2) simultaneous optimization of feature subset and neighborhood size overall outperforms optimizing either alone; and (3) among all ten algorithms tested for simultaneous optimization, APDHSOS and AP2DHSOS tie for the best in terms of classification error, implying that an adaptive population size works better than a fixed one.
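The simultaneous encoding can be sketched as one candidate packing a 0/1 feature mask together with the neighborhood size k, scored by kNN error. This is a minimal hypothetical sketch (plain holdout scoring, squared Euclidean distance); the paper evaluates with stratified k-fold cross validation:

```python
def fitness(candidate, train, holdout):
    """Score one SOS organism: `candidate` is (feature_mask, k).
    Lower is better (fraction of misclassified holdout points)."""
    mask, k = candidate
    def dist(a, b):
        # squared Euclidean distance over the selected features only
        return sum((x - y) ** 2 for m, x, y in zip(mask, a, b) if m)
    errors = 0
    for x, label in holdout:
        neigh = sorted(train, key=lambda t: dist(t[0], x))[:k]
        votes = [c for _, c in neigh]
        if max(set(votes), key=votes.count) != label:
            errors += 1
    return errors / len(holdout)
```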
      Graphical abstract image

      PubDate: 2018-02-05T06:14:53Z
  • Evolutionary multi-objective optimization assisted by metamodels, kernel
           PCA and multi-criteria decision making techniques with applications in
    • Authors: Dimitrios Kapsoulis; Konstantinos Tsiakas; Xenofon Trompoukis; Varvara Asouti; Kyriakos Giannakoglou
      Pages: 1 - 13
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Dimitrios Kapsoulis, Konstantinos Tsiakas, Xenofon Trompoukis, Varvara Asouti, Kyriakos Giannakoglou
      This paper presents two methods which aim at improving the efficiency of evolutionary algorithms (EAs) and metamodel-assisted EAs (MAEAs) used to solve multi-objective optimization problems with computationally expensive evaluation tools. The EAs and MAEAs are accelerated by applying kernel principal component analysis (KPCA) during: (a) the application of the evolution operators, by processing the population members in a new feature space rather than the standard design space, and/or (b) the metamodel training, to reduce the number of input units and, thus, obtain more accurate predictions. In addition, a variant of the EA (or MAEA) which takes the decision maker's (DM) preferences into consideration during the evolution is proposed. In contrast to standard multi-objective EAs, which may insufficiently populate the preferred area(s) of the objective space, more non-dominated solutions are now driven towards them. This is achieved by using the multi-criteria decision making (MCDM) Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which affects the parent selection and the non-dominated front trimming operators. The combined use of KPCA and TOPSIS is implemented, too. The proposed methods are evaluated by solving two computationally demanding aerodynamic shape optimization problems, with two objectives each.
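The TOPSIS ranking used to bias parent selection can be sketched as follows. This is the textbook closeness-coefficient form for benefit criteria (higher scores better), not the paper's exact integration into the EA operators:

```python
import math

def topsis(scores, weights):
    """Rank alternatives by closeness to the ideal solution.
    `scores[i][j]` is alternative i on benefit criterion j.
    Returns one closeness coefficient in [0, 1] per alternative."""
    cols = list(zip(*scores))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in scores]
    ideal = [max(col) for col in zip(*weighted)]
    nadir = [min(col) for col in zip(*weighted)]
    def d(row, ref):
        return math.sqrt(sum((v - r) ** 2 for v, r in zip(row, ref)))
    return [d(r, nadir) / (d(r, nadir) + d(r, ideal)) for r in weighted]
```

In a preference-driven EA, individuals with larger closeness coefficients would be favored during parent selection and front trimming.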
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.046
      Issue No: Vol. 64 (2017)
  • Using differential evolution for improving distance measures of nominal
    • Authors: Diab M. Diab; Khalil El Hindi
      Pages: 14 - 34
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Diab M. Diab, Khalil El Hindi
      Enhancing distance measures is the key to improving the performance of instance-based learning (IBL) and many machine learning (ML) algorithms. The value difference metric (VDM) and the inverted specific-class distance measure (ISCDM) are among the top-performing distance measures that address nominal attributes. They use conditional probability terms to estimate the distance between nominal values; therefore, their accuracy mainly depends on the accurate estimation of these terms, which can be difficult if the training data is scarce. In this study, different metaheuristic approaches are used to find better estimations of these terms for both VDM and ISCDM independently. We transform the conditional probability estimation problem into an optimization problem and exploit three metaheuristic approaches to solve it, namely multi-parent differential evolution (MPDE), genetic algorithms (GA), and simulated annealing (SA). The goal of the objective function is to maximize the classification accuracy of the k-nearest neighbors (kNN) algorithm. We propose a new fine-tuning method, which we name the modified selective fine-tuning (MSFT) method, a new hybrid fine-tuning method (i.e., a combination of two fine-tuning methods), and three different ways of creating initial populations by manipulating the original estimated conditional probability terms used in VDM and ISCDM and the fine-tuned conditional probability terms obtained from other fine-tuning methods. We compare the performance of all approaches with the original distance measures using 53 general benchmark datasets. The experimental results show that the proposed methods significantly improve the classification and generalization accuracy of the VDM and ISCDM measures.
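The VDM itself is standard and can be sketched directly: the distance between two nominal values is a sum over classes of differences in conditional probabilities, estimated here by simple counting (the quantity the metaheuristics above fine-tune):

```python
from collections import Counter, defaultdict

def vdm_distance(values, labels, a, b, q=2):
    """Value Difference Metric between nominal values a and b:
    sum over classes c of |P(c|a) - P(c|b)|**q, with the conditional
    probabilities estimated from (value, label) training pairs."""
    counts = defaultdict(Counter)
    for v, c in zip(values, labels):
        counts[v][c] += 1
    classes = set(labels)
    def p(c, v):
        total = sum(counts[v].values())
        return counts[v][c] / total if total else 0.0
    return sum(abs(p(c, a) - p(c, b)) ** q for c in classes)
```

When data is scarce the counted estimates of P(c|v) are noisy, which is exactly why the paper replaces them with fine-tuned values found by MPDE, GA, or SA.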

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.007
      Issue No: Vol. 64 (2017)
  • Closed-form solution based genetic algorithm software: Application to
           multiple cracks detection on beam structures by static tests
    • Authors: A. Greco; A. Pluchino; F. Cannizzaro; S. Caddemi; I. Caliò
      Pages: 35 - 48
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): A. Greco, A. Pluchino, F. Cannizzaro, S. Caddemi, I. Caliò
      In this paper a procedure for the static identification and reconstruction of concentrated damage distributions in beam-like structures, implemented in a dedicated software package, is presented. The proposed damage identification strategy relies on the solution of an optimisation problem, by means of a genetic algorithm, which exploits the closed-form solution, based on distribution theory, of multi-cracked beams subjected to static loads. Precisely, the adoption of the closed-form solution allows a straightforward evolution of an initial random population of chromosomes, representing different damage distributions along the beam axis, towards the fittest chromosome, which is selected as the sought solution. The method allows the identification of the position and intensity of an arbitrary number of cracks and is limited only by the amount of experimentally measured data. The proposed procedure, which has the great advantage of being robust and very fast, has been implemented in the powerful agent-based software environment NetLogo, and is here presented and validated with reference to several benchmark cases of single and multi-cracked beams considering different load scenarios and boundary conditions. Sensitivity analyses to assess the influence of instrumental errors are also included in the study.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.040
      Issue No: Vol. 64 (2017)
  • Using wavelet sub-band and fuzzy 2-partition entropy to segment chronic
           lymphocytic leukemia images
    • Authors: Thaína A. Azevedo Tosta; Paulo Rogério Faria; Valério Ramos Batista; Leandro Alves Neves; Marcelo Zanchetta do Nascimento
      Pages: 49 - 58
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Thaína A. Azevedo Tosta, Paulo Rogério Faria, Valério Ramos Batista, Leandro Alves Neves, Marcelo Zanchetta do Nascimento
      Histological image analysis is an important procedure for diagnosing different types of cancer. One of them is chronic lymphocytic leukemia (CLL), which can be identified by applying image segmentation techniques. This study presents an unsupervised method to segment neoplastic nuclei in CLL images. First, deconvolution, histogram equalization and mean filtering were applied to enhance nuclear regions. Then, a segmentation technique based on a combination of the wavelet transform, fuzzy 2-partition entropy and a genetic algorithm was used, followed by removal of false-positive regions and the application of valley-emphasis and morphological operations. H&E-stained histological images were used to evaluate the proposed algorithm. The proposed method attained an accuracy of more than 80%, surpassing similar methods. Its spatial distribution is highly consistent with manual segmentation, with a lower overlapping rate than other techniques in the literature.
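The fuzzy 2-partition entropy criterion can be sketched as follows: for a candidate pair of membership breakpoints, compute the fuzzy probabilities of the two partitions and their Shannon entropy, which the segmentation stage maximizes (with a GA in the paper). The piecewise-linear membership function here is a simplifying assumption, not the paper's exact membership shape:

```python
import math

def fuzzy_2partition_entropy(hist, a, b):
    """Fuzzy 2-partition entropy of a grey-level histogram for
    breakpoints a < b; larger entropy = more balanced fuzzy split."""
    def mu_dark(g):
        # membership of grey level g in the dark (background) partition
        if g <= a:
            return 1.0
        if g >= b:
            return 0.0
        return (b - g) / (b - a)
    total = sum(hist)
    p_dark = sum(h * mu_dark(g) for g, h in enumerate(hist)) / total
    p_bright = 1.0 - p_dark
    ent = 0.0
    for p in (p_dark, p_bright):
        if p > 0:
            ent -= p * math.log(p)
    return ent
```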
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.039
      Issue No: Vol. 64 (2017)
  • A spectral clustering method with semantic interpretation based on
           axiomatic fuzzy set theory
    • Authors: Yuangang Wang; Xiaodong Duan; Xiaodong Liu; Cunrui Wang; Zedong Li
      Pages: 59 - 74
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Yuangang Wang, Xiaodong Duan, Xiaodong Liu, Cunrui Wang, Zedong Li
      Owing to its good performance in clustering non-convex datasets, spectral clustering has attracted much attention and become one of the most popular clustering algorithms in recent decades. However, existing spectral clustering methods are sensitive to the parameter settings used in building the affinity matrix, which seriously jeopardizes the algorithm's immunity to noisy data. Moreover, in many application domains, including credit rating and medical diagnosis, it is very important that the learned model be understandable and interpretable. To make spectral clustering competitive in both classification rate and comprehensibility, we propose a spectral clustering method with semantic interpretation based on axiomatic fuzzy set (AFS) theory, which integrates the representation capability of AFS and the classification competence of spectral clustering (N-cut). The effectiveness of the proposed approach is demonstrated using real-world datasets, and the experimental results indicate that the performance of our method is comparable with that of classic spectral clustering algorithms (NJW, SM, Diffuzzy, AASC and SOM-SC) and other clustering methods, including K-means, fuzzy c-means, and MinMax K-means. Meanwhile, the proposed method can be used to explore the underlying clusters and describe their characteristics in the form of fuzzy descriptions.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.004
      Issue No: Vol. 64 (2017)
  • Computational intelligence in optical remote sensing image processing
    • Authors: Yanfei Zhong; Ailong Ma; Yew soon Ong; Zexuan Zhu; Liangpei Zhang
      Pages: 75 - 93
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Yanfei Zhong, Ailong Ma, Yew soon Ong, Zexuan Zhu, Liangpei Zhang
      With the ongoing development of Earth observation techniques, huge amounts of remote sensing images with a high spectral-spatial-temporal resolution are now available, and have been successfully applied in a variety of fields. In the process, they bring about great challenges, such as high-dimensional datasets (the high spatial resolution and hyperspectral features), complex data structures (nonlinear and overlapping distributions), and the nonlinear optimization problem (high computational complexity). Computational intelligence techniques, which are inspired by biological systems, can provide possible solutions to the above-mentioned problems. In this paper, we provide an overview of the application of computational intelligence technologies in optical remote sensing image processing, including: 1) feature representation and selection; 2) classification and clustering; and 3) change detection. Subsequently, the core potentials of computational intelligence for optical remote sensing image processing are delineated and discussed.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.045
      Issue No: Vol. 64 (2017)
  • Integrating cluster validity indices based on data envelopment analysis
    • Authors: Boseop Kim; Hakyeon Lee; Pilsung Kang
      Pages: 94 - 108
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Boseop Kim, Hakyeon Lee, Pilsung Kang
      Because clustering is an unsupervised learning task, a number of different validity indices have been proposed to measure the quality of clustering results. However, there is no single best validity measure for all types of clustering tasks, because each validity index has both advantages and shortcomings. Since each index has demonstrated its effectiveness in particular cases, it is reasonable to expect that a more generalized clustering validity index can be developed if individually effective cluster validity indices are appropriately integrated. In this paper, we propose a new cluster validity index, named Charnes, Cooper & Rhodes − cluster validity (CCR-CV), which integrates eight internal clustering efficiency measures based on data envelopment analysis (DEA). The proposed CCR-CV can be used for more general purposes because it extends the coverage of a single validity index by adaptively adjusting the combining weights of different validity indices for different datasets. Based on experimental results on 12 artificial and 30 real datasets, the proposed clustering validity index demonstrates a superior ability to determine optimal and plausible cluster structures compared to the benchmark individual validity indices.

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.052
      Issue No: Vol. 64 (2017)
  • A new Ensemble based multi-agent system for prediction problems: Case
           study of modeling coal free swelling index
    • Authors: Mehdi Golzadeh; Esmaeil Hadavandi; S. Chehreh Chelgani
      Pages: 109 - 125
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Mehdi Golzadeh, Esmaeil Hadavandi, S. Chehreh Chelgani
      In this article, a new ensemble-based multi-agent system called “EMAS” is introduced for prediction problems in data mining. The EMAS is constructed using a four-layer multi-agent system architecture to generate a data mining process based on the coordination of intelligent agents, and its performance rests on data preprocessing and prediction. The first layer is dedicated to cleaning and normalizing data. The second layer performs data preprocessing using intelligent variable ranking to select the most effective agents (i.e., the most important input variables for modeling an output variable). In the third layer, a negative correlation learning (NCL) algorithm is used to train a neural network ensemble (NNE). The fourth layer carries out three subtasks: knowledge discovery, prediction and data presentation. The ability of the EMAS is evaluated using a robust coal database (3238 records) for prediction of the Free Swelling Index (FSI), an important problem in the coke-making industry, and the outcomes are compared with the results of other conventional modeling methods. Coal particles have complex structures, and EMAS can explore complicated relationships between their structural parameters and select the most important ones for FSI modeling. The results show that the EMAS outperforms all presented modeling methods; therefore, it can be considered a suitable tool for prediction problems. Moreover, the results indicate that the EMAS can be further employed as a reliable tool to select important variables, predict complicated problems, and model, control, and optimize fuel consumption in iron-making plants and other energy facilities.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.013
      Issue No: Vol. 64 (2017)
  • Forest road profile optimization using meta-heuristic techniques
    • Authors: Razieh Babapour; Ramin Naghdi; Ismael Ghajar; Zahra Mortazavi
      Pages: 126 - 137
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Razieh Babapour, Ramin Naghdi, Ismael Ghajar, Zahra Mortazavi
      An optimal design of vertical alignment, considering the design constraints and costs, is one of the most complicated problems in road planning and construction. Many linear, nonlinear and heuristic techniques that enhance the designer's ability to minimize the total cost of road construction over many different variables are well acknowledged. It is assumed that the genetic algorithm (GA) and particle swarm optimization (PSO) can be efficiently applied to road vertical alignment allocation. This paper focuses on solving the vertical alignment optimization problem using meta-heuristic algorithms. Two intelligent optimization tools, GA and PSO, were used to find a near-optimal forest road profile connecting specified endpoints, considering the restrictions associated with forest road profile design together with cost evaluation. Setting parameters such as the population size and the crossover and mutation rates in GA, and the best group and particle positions in PSO, were tested in the search for the global optimum. Results of optimization by the GA and PSO approaches were compared with the common manual road profile drawing method. The GA and PSO reduced earthwork costs while designing smoother and higher-quality alignments than the manual design. Among the applied optimization methods, the GA was the most suitable for this type of problem, since it is able to retain the optimum position at better solutions with a reduced computational cost. From the cost point of view, optimizing a fixed length of road profile with the GA under different population sizes performed better for large numbers of control points, while yielding smoother profiles for small numbers of control points.

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.015
      Issue No: Vol. 64 (2017)
  • Detection of myocardial infarction in 12 lead ECG using support vector machine
    • Authors: Ashok Kumar Dohare; Vinod Kumar; Ritesh Kumar
      Pages: 138 - 147
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Ashok Kumar Dohare, Vinod Kumar, Ritesh Kumar
      In this paper, we propose myocardial infarction (MI) detection using 12-lead ECG data, analyzing each lead with the help of a composite lead. This composite lead is used to detect the ECG wave components and clinical wave intervals in all 12 leads. Four clinical features, namely the P duration, QRS duration, ST-T complex interval and QT interval, are globally determined from the average beats of all 12 leads. Then the peak-to-peak amplitude, area, mean, standard deviation, skewness and kurtosis are determined for the P duration, QRS duration and ST-T complex interval of the average beats of all 12 leads. These 220 (4 + 6 × 3 × 12) parameters are used for myocardial infarction detection. Standard 12-lead ECG data of 60 myocardial infarction subjects and 60 healthy control (HC) cases are obtained from the Physikalisch-Technische Bundesanstalt (PTB) database and tested with a support vector machine (SVM) classifier. The MI detection sensitivity, specificity and accuracy are 96.66%, 100% and 98.33%, respectively. To reduce the computational complexity, feature dimension reduction is important; the proposed method therefore applies Principal Component Analysis (PCA), reducing the 220 features to 14. Using these 14 features, MI detection with the SVM classifier achieves 96.66% sensitivity, 96.66% specificity and 96.66% accuracy.
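The six per-interval descriptors named above (peak-to-peak amplitude, area, mean, standard deviation, skewness and kurtosis) can be sketched for one wave interval of one lead as follows. The moment-based definitions of skewness and kurtosis and the absolute-sum "area" are assumptions; the paper does not spell out its exact formulas:

```python
import statistics as st

def interval_features(samples):
    """Six statistical descriptors of one ECG wave interval
    (e.g. the ST-T complex samples of one lead's average beat)."""
    n = len(samples)
    mean = st.fmean(samples)
    sd = st.pstdev(samples)
    def moment(k):
        return sum((x - mean) ** k for x in samples) / n
    skew = moment(3) / sd ** 3 if sd else 0.0
    kurt = moment(4) / sd ** 4 if sd else 0.0
    return {"p2p": max(samples) - min(samples),
            "area": sum(abs(x) for x in samples),
            "mean": mean, "std": sd,
            "skewness": skew, "kurtosis": kurt}
```

Applying this to 3 intervals across 12 leads gives 6 × 3 × 12 = 216 descriptors, which with the 4 global interval durations yields the 220 parameters cited above.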
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.001
      Issue No: Vol. 64 (2017)
  • Simulated annealing for a multi-level nurse rostering problem in
           hemodialysis service
    • Authors: Zhenyuan Liu; Zaisheng Liu; Zhipeng Zhu; Yindong Shen; Junwu Dong
      Pages: 148 - 160
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Zhenyuan Liu, Zaisheng Liu, Zhipeng Zhu, Yindong Shen, Junwu Dong
      Hemodialysis service is provided through the cooperation of nurses taking on different roles: in-charge nurse, dispensing nurse, and treatment nurse. The goal of the multi-level nurse rostering problem in hemodialysis service (MLHSNRP) is to assign multi-level nurses to satisfy the demand of the different roles, which includes both the number of nurses and the required qualification levels. The evaluation criteria consist of satisfying the level requirements and the nurses' preferences on shifts and roles. A 0–1 integer programming model with a synthetic objective is formulated, and a simulated annealing (SA) approach based on a heuristic algorithm is developed. The heuristic algorithm rosters nurses to cover daily demands in order, following a set of heuristic rules. Three neighborhood structures are embedded into SA to improve the solution obtained by the heuristic algorithm. A series of instances are generated based on real cases from Hemodialysis Service Center P in Wuhan, China. Computational experiments are conducted on different combinations of the neighborhood structures and the parameters of SA, and the combination with the best optimization effect is identified. A comparative evaluation of SA is carried out against a hybrid artificial bee colony (HABC) algorithm. The results show that SA performs better as a whole, that SA and HABC each have advantages on problems of different scales, and that SA runs faster than HABC.
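The SA scheme described above follows the generic skeleton below: start from the heuristic roster, repeatedly draw a neighbor (the paper cycles three roster-specific neighborhood structures), accept worse moves with probability exp(-delta/T), and cool T. The geometric cooling and the parameter defaults are illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, neighbors, cost, t0=100.0, alpha=0.95,
                        steps=500, rng=random):
    """Generic SA skeleton. `neighbors(state, rng)` returns a random
    neighbor; `cost` is minimized; returns the best state seen."""
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbors(current, rng)
        delta = cost(cand) - cost(current)
        # always accept improvements; accept worse moves with
        # probability exp(-delta / T), then cool geometrically
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
        if cost(current) < cost(best):
            best = current
        t *= alpha
    return best
```

For the rostering problem, `initial` would be the heuristic roster and `cost` the synthetic objective of the 0–1 model.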
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.005
      Issue No: Vol. 64 (2017)
  • Volleyball Premier League Algorithm
    • Authors: Reza Moghdani; Khodakaram Salimifard
      Pages: 161 - 185
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Reza Moghdani, Khodakaram Salimifard
      This article proposes a novel metaheuristic algorithm called the Volleyball Premier League (VPL), inspired by the competition and interaction among volleyball teams during a season. It also mimics the coaching process during a volleyball match. To solve global optimization problems using the volleyball metaphor, terms such as substitution, coaching, and learning are captured in the VPL algorithm. The proposed algorithm is benchmarked on 23 well-known test functions, which are categorized into three groups, namely unimodal, multimodal and fixed-dimension multimodal functions. The solutions obtained using the VPL are compared with those of other metaheuristic algorithms, including Particle Swarm Optimization (PSO), Differential Evolution (DE), the Genetic Algorithm (GA), Artificial Bee Colony (ABC), the Firefly Algorithm (FA), Harmony Search (HS), the Sine Cosine Algorithm (SCA), Soccer League Competition (SLC), and the League Championship Algorithm (LCA). In addition, VPL has been used to solve three classical engineering design optimization problems. The results show that the VPL algorithm produces superior performance over the other well-known metaheuristic algorithms and is effectively applicable to problems with complex search spaces.

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.043
      Issue No: Vol. 64 (2017)
  • Adaptive neighborhood selection for many-objective optimization problems
    • Authors: Juan Zou; Yuping Zhang; Shengxiang Yang; Yuan Liu; Jinhua Zheng
      Pages: 186 - 198
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Juan Zou, Yuping Zhang, Shengxiang Yang, Yuan Liu, Jinhua Zheng
      It is generally accepted that conflicts between convergence and distribution worsen as the number of objectives increases. Furthermore, Pareto dominance loses its effectiveness in many-objective optimization problems (MaOPs), which have more than three objectives, so a more valid selection method is needed to balance convergence and distribution. This paper presents a many-objective evolutionary algorithm, called Adaptive Neighborhood Selection for Many-objective evolutionary algorithm (ANS-MOEA), to deal with MaOPs. This method characterizes the performance of each individual by two types of information: convergence information (CI) and distribution information (DI). In the critical layer, a well-converged individual is selected first from the population, and its neighbors, determined by DI, are then pushed into the neighbor collection (NC). The proper distribution of the population is subsequently ensured by competition: individuals with large DI go back to the population, while individuals with small DI remain in the collection. Four state-of-the-art MaOEAs are selected as competing algorithms to validate ANS-MOEA. The experimental results show that ANS-MOEA can solve MaOPs and generate a set of remarkable solutions that balance convergence and distribution.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.041
      Issue No: Vol. 64 (2017)
  • Picture fuzzy normalized projection-based VIKOR method for the risk
           evaluation of construction project
    • Authors: Le Wang; Hong-yu Zhang; Jian-qiang Wang; Lin Li
      Pages: 216 - 226
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Le Wang, Hong-yu Zhang, Jian-qiang Wang, Lin Li
      Identifying the priority of risk factors through risk evaluation is a significant decision-making problem for construction project management teams. In this study, a multi-criteria decision-making framework is constructed for the risk evaluation of construction projects with picture fuzzy information. First, several limitations of the extant picture fuzzy projection model are identified. Second, a picture fuzzy normalized projection (PFNP) model is proposed to overcome these limitations. Third, an entropy weight method for picture fuzzy sets is established to calculate the objective weight vector of the criteria. Furthermore, an integrated picture fuzzy normalized projection-based VIKOR method is constructed by integrating the PFNP model and the VIKOR method in a picture fuzzy environment. The proposed method considers not only the difference between two picture fuzzy numbers (PFNs), in terms of both distance and included angle, but also the compromise among the criteria. Finally, a case study, a sensitivity analysis, and a comparative analysis are conducted to demonstrate the practicality and validity of the proposed method.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.12.014
      Issue No: Vol. 64 (2017)
  • Enhancing the performance of differential evolution with covariance matrix
    • Authors: Xiaoyu He; Yuren Zhou
      Pages: 227 - 243
      Abstract: Publication date: March 2018
      Source:Applied Soft Computing, Volume 64
      Author(s): Xiaoyu He, Yuren Zhou
      Differential evolution (DE) is an efficient global optimizer, while the covariance matrix adaptation evolution strategy (CMA-ES) shows great power in local search. However, combining both of these advantages in one algorithm is difficult, since the randomness introduced by DE may reduce the reliability of the covariance matrix estimation. Moreover, the exploration ability of DE can be canceled out by CMA-ES, because the two use completely different mechanisms to control the search step. To take advantage of both DE and CMA-ES, we propose a novel DE variant with covariance matrix self-adaptation, named DECMSA. In DECMSA, a new mutation scheme named "DE/current-to-better/1" is implemented. This scheme uses a Gaussian distribution to guide the search and strengthens both the exploration and exploitation capabilities of DE. The proposed algorithm has been tested on the CEC-13 benchmark suite. The experimental results demonstrate that DECMSA outperforms popular DE variants and is quite competitive with state-of-the-art CMA-ES variants such as IPOP-CMA-ES and BIPOP-CMA-ES. Moreover, equipped with a constraint handling method, DECMSA produces better solutions than the other algorithms compared on three classic constrained engineering design problems.
      Graphical abstract image

      PubDate: 2017-12-27T07:39:26Z
      DOI: 10.1016/j.asoc.2017.11.050
      Issue No: Vol. 64 (2017)
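A "DE/current-to-better/1"-style mutation of the kind the abstract describes can be sketched as follows. This is a hypothetical reconstruction: each individual moves toward a randomly chosen better-ranked individual, and the difference vector is replaced by a sample from a Gaussian whose covariance models the promising region. DECMSA's actual covariance self-adaptation rule is not reproduced here.

```python
import numpy as np

def de_current_to_better_1(pop, fitness, F, cov, rng):
    """Illustrative 'DE/current-to-better/1'-style mutation.

    pop:     (N, D) population matrix.
    fitness: (N,) fitness values (lower is better).
    F:       scale factor pulling each individual toward a better one.
    cov:     (D, D) covariance of the Gaussian exploration step
             (stand-in for DECMSA's self-adapted covariance matrix).
    """
    n, d = pop.shape
    order = np.argsort(fitness)           # indices sorted best-first
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    mutants = np.empty_like(pop)
    for i in range(n):
        # choose a guide among individuals ranked at least as well as i
        better = order[:max(1, rank[i])]
        guide = pop[rng.choice(better)]
        gauss_step = rng.multivariate_normal(np.zeros(d), cov)
        mutants[i] = pop[i] + F * (guide - pop[i]) + gauss_step
    return mutants
```

Sampling the perturbation from N(0, cov) instead of using a random difference vector is how a covariance model can steer DE's step directions while the current-to-better pull preserves its convergence pressure.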
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327

JournalTOCs © 2009-