Information Sciences
  [SJR: 2.513]   [H-I: 114]
   Hybrid journal (it can contain Open Access articles)
   ISSN (Print): 0020-0255
   Published by Elsevier
  • Distributed optimization for multi-agent systems with constraints set and
           communication time-delay over a directed graph
    • Authors: Dong Wang; Zhu Wang; Mingfei Chen; Wei Wang
      Pages: 1 - 14
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 438
      This paper is concerned with the problem of distributed optimization for a multi-agent system with a constraint set and communication time-delay over a directed graph. The considered cost function is the summation of the local cost functions associated with each agent. Firstly, a novel distributed algorithm is developed to solve this problem, in which auxiliary state variables are also exchanged to compensate for the nonzero gradients of the local cost functions and to accelerate the convergence of the state estimates to the optimal point. Secondly, the minimizer of the distributed optimization problem over a multi-agent network is characterized by a variational inequality, despite the presence of time delay. Furthermore, delay-dependent and delay-free sufficient conditions for the convergence of the agents' states to the optimal point are derived, respectively, by constructing a new Lyapunov–Krasovskii functional. Finally, a numerical example and a comparison are provided to validate the obtained results.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.040
      Issue No: Vol. 438 (2018)
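
Editor's note: the entry above describes a consensus-plus-gradient scheme. As a rough, hypothetical illustration of that family of methods (not the authors' algorithm, and ignoring the constraint set and delays), a minimal distributed gradient descent over a directed ring might look like the sketch below; the weights, step size, and cost functions are invented for the example.

import numpy as np

# Each agent i privately minimizes f_i(x) = 0.5 * (x - a_i)^2, so the
# minimizer of the summed cost is the mean of the a_i (here 4.0).
a = np.array([1.0, 3.0, 5.0, 7.0])
n = len(a)
# Row-stochastic weights of a directed ring: agent i also listens to agent i+1.
W = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))
x = np.zeros(n)      # local estimates of the common decision variable
alpha = 0.05         # constant step size (the theory often uses diminishing steps)

for _ in range(2000):
    grad = x - a                  # local gradients of f_i at x_i
    x = W @ x - alpha * grad      # consensus step followed by gradient step

print(x)  # all entries cluster near the optimum 4.0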
       
  • D-NTRU: More efficient and average-case IND-CPA secure NTRU variant
    • Authors: Baocang Wang; Hao Lei; Yupu Hu
      Pages: 15 - 31
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 438
      NTRU is a fast public key cryptosystem that remains unbroken. However, no worst-case-to-average-case reduction is known for the original NTRU cryptosystem. Several provably secure NTRU modifications, such as NAEP, pNE and NTRUCCA, were proposed in the literature at the cost of inefficient encryption/decryption and enlarged ciphertext expansion. NAEP achieves IND-CCA security for the original NTRU under the average-case NTRU one-wayness intractability assumption. Both pNE and NTRUCCA obtain their provable security goals under worst-case lattice assumptions. In this paper, a general framework for NTRU is considered, and a new public key cryptosystem called D-NTRU is proposed. It is shown that D-NTRU reduces the ciphertext expansion of the NTRU algorithm, and that its encryption and decryption algorithms are even asymptotically faster than NTRU's, at the cost of only slightly enlarged secret and public keys. The security of D-NTRU is proven in the standard model under the average-case NTRU one-wayness assumption. The proof of the IND-CPA security of D-NTRU is completed by introducing another NTRU variant, called C-NTRU, as a bridge, defining some problems, and then proving the equivalence of these problems. The proposed D-NTRU algorithm is thus more advantageous than the original NTRU algorithm, and much more efficient than all the provably secure variants of NTRU.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.037
      Issue No: Vol. 438 (2018)
       
  • An Artificial Immune Network for Distributed Demand-Side Management in
           Smart Grids
    • Authors: Diego Lizondo; Sebastian Rodriguez; Adrián Will; Victor Jimenez; Jorge Gotay
      Pages: 32 - 45
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 438
      In this work we present a Distributed Demand-Side Management system based on the Artificial Immune Network algorithm. It implements intelligent, distributed and autonomous control of the customers' air-conditioning devices in order to meet the desired demand. The system is particularly suited to tackling the peak-load problem that appears in tropical and subtropical climates due to the simultaneous use of thousands of these devices. The design follows the guidelines set by the Smart Grid paradigm, in the sense that it is fault tolerant, distributed and self-controlled, and it requires minimal communication infrastructure compared to a centralized system. The algorithm was evaluated using synthetic and real data. We define Maximal and Average Tolerance as performance metrics, and show that the system keeps consumption within 1% of the given load limit in all five cases.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.039
      Issue No: Vol. 438 (2018)
       
  • Generalized exponential autoregressive models for nonlinear time series:
           Stationarity, estimation and applications
    • Authors: Guang-yong Chen; Min Gan; Guo-long Chen
      Pages: 46 - 57
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 438
      Generalized exponential autoregressive (GExpAR) models extend the classic exponential autoregressive (ExpAR) model with much greater flexibility. In this paper, we first review the development of ExpAR models and then discuss the stationarity conditions of the GExpAR model. A new estimation algorithm based on the variable projection method is proposed for GExpAR models. Finally, the models are applied to the modeling and prediction of two real-world time series. Comparison results show that (i) the proposed estimation approach is much more efficient than the classic method, and (ii) the GExpAR models are more powerful in modeling nonlinear time series.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.029
      Issue No: Vol. 438 (2018)
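
Editor's illustration (not from the paper): the classic ExpAR(1) model that GExpAR generalizes has the form x_t = (phi + pi * exp(-gamma * x_{t-1}^2)) * x_{t-1} + eps_t. A minimal simulation, with made-up parameter values, might look like this.

import numpy as np

rng = np.random.default_rng(0)
phi, pi_, gamma = 0.8, 0.4, 1.0   # illustrative ExpAR(1) parameters
x = np.zeros(500)
for t in range(1, 500):
    # amplitude-dependent AR coefficient: for large |x| the process behaves
    # like an AR(1) with coefficient phi, near zero like phi + pi_
    coef = phi + pi_ * np.exp(-gamma * x[t - 1] ** 2)
    x[t] = coef * x[t - 1] + 0.1 * rng.standard_normal()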
       
  • Geo-cascading and community-cascading in social networks: Comparative
           analysis and its implications to edge caching
    • Authors: Konglin Zhu; Zhicheng Chen; Lin Zhang; Yuan Zhang; Sang-Wook Kim
      Pages: 1 - 12
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      The proliferation of social networking services (SNSs) enables information to be shared through social links. Depending on the characteristics of the information and its social properties, the diffused information takes various shapes: it may be shaped by social communities or by geographical regions. To investigate these different information diffusion behaviors, we study over 54,000 pieces of diffused information in a Twitter data set. We propose both macro and micro approaches to distinguish different information diffusion behaviors, either geo-cascading or community-cascading. We then study the characteristics of information cascading behaviors and find a more sophisticated diffusion network pattern in community-cascading diffusion networks. Furthermore, we conduct experiments to show the impact of the different kinds of information diffusion behaviors on caching behavior for mobile edge computing. It turns out that a caching scheme combining both social and geographical factors outperforms conventional caching schemes.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.012
      Issue No: Vol. 436-437 (2018)
       
  • RECOME: A new density-based clustering algorithm using relative KNN kernel
           density
    • Authors: Yangli-ao Geng; Qingyong Li; Rong Zheng; Fuzhen Zhuang; Ruisi He; Naixue Xiong
      Pages: 13 - 30
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      Discovering clusters in a dataset with different shapes, densities, and scales is a known challenge in data clustering. In this paper, we propose the RElative COre MErge (RECOME) clustering algorithm. At the core of RECOME is a novel density measure, the Relative K nearest Neighbor Kernel Density (RNKD). RECOME identifies core objects with unit RNKD, and partitions non-core objects into atom clusters by successively following higher-density neighbor relations toward core objects. Core objects and their corresponding atom clusters are then merged through α-reachable paths on a KNN graph. We show that the number of clusters computed by RECOME is a step function of the parameter α, with jump discontinuities on a small collection of values, and we propose a fast jump discontinuity discovery (FJDD) method based on graph theory. RECOME is evaluated on both synthetic and real datasets. Experimental results indicate that RECOME is able to discover clusters with different shapes, densities, and scales, and that it outperforms six baseline methods on both kinds of datasets. Moreover, FJDD is shown to be effective in extracting the jump discontinuity set of the parameter α on all tested datasets, which eases data exploration and parameter tuning.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.013
      Issue No: Vol. 436-437 (2018)
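
Editor's sketch of the flavor of relative KNN kernel density, following only the abstract's description (the paper's exact formulas may differ): a point's kernel density over its K nearest neighbors is divided by the largest density among those neighbors, so locally densest points score 1 and become core objects.

import numpy as np

def relative_knn_density(X, k=5):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]                              # K nearest neighbors
    dens = np.exp(-np.take_along_axis(d, knn, axis=1)).sum(axis=1)  # KNN kernel density
    # relative density: exactly 1 when a point is at least as dense as its neighbors
    return dens / np.maximum(dens, dens[knn].max(axis=1))

X = np.random.default_rng(1).normal(size=(200, 2))
rnkd = relative_knn_density(X)
print("core objects:", int((rnkd == 1.0).sum()))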
       
  • Non-parametric Bayesian annotator combination
    • Authors: M. Servajean; R. Chailan; A. Joly
      Pages: 131 - 145
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      Relying on a single imperfect human annotator is not recommended in real crowdsourced classification problems. In practice, several annotators' propositions are generally aggregated to obtain a better classification accuracy. Bayesian approaches, which model the relationship between each annotator's output and the possible true labels (classes), have been shown to outperform simpler models. Unfortunately, they assume that the total number of true labels is known. This is not the case in many realistic scenarios, such as open-world classification, where the number of possible labels is undetermined and may change over time. In this paper, we show how to set a non-parametric prior over the possible label set using the Dirichlet process in order to overcome this limitation. We apply this prior to the state-of-the-art Bayesian annotator combination (BAC) model, resulting in the so-called non-parametric BAC (NPBAC). We show how to derive its variational equations to evaluate the model, and how to assess it when the Dirichlet process itself has a prior, using the Laplace method. We apply the model to several scenarios related to closed-world classification, open-world classification and novelty detection, on a previously published dataset and on two datasets related to plant classification. Our experiments show that NPBAC is able to determine the true number of labels and, surprisingly, that it largely outperforms the parametric annotator combination by modeling more complex confusions, in particular when few or no training data are available.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.020
      Issue No: Vol. 436-437 (2018)
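
Editor's sketch: the Dirichlet process underlying NPBAC can be illustrated by a Chinese-restaurant-process draw, in which the number of distinct labels is unbounded and grows with the data. The concentration parameter below is arbitrary.

import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0          # concentration: larger values create new labels more often
counts = []          # observations assigned to each label seen so far
for _ in range(200):
    weights = np.array(counts + [alpha], dtype=float)
    k = rng.choice(len(weights), p=weights / weights.sum())
    if k == len(counts):
        counts.append(1)     # a brand-new label is created
    else:
        counts[k] += 1
print(len(counts), "distinct labels emerged from 200 draws")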
       
  • Fuzzy-model-based H∞ control for discrete-time switched systems with
           quantized feedback and unreliable links
    • Authors: Jun Cheng; Xiao-Heng Chang; Ju H. Park; Hong Li; Hailing Wang
      Pages: 181 - 196
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      This paper addresses the issue of fuzzy-model-based H∞ control for discrete-time switched systems with quantized feedback and unreliable communication links. It is assumed that packet dropouts and quantized feedback exist simultaneously in the unreliable links of the fuzzy switched systems. By constructing a suitable Lyapunov–Krasovskii functional, together with an improved Jensen inequality and a refined reciprocally convex matrix inequality, a new set of sufficient conditions is established. Finally, the applicability and effectiveness of the derived approaches are illustrated through a numerical example.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.021
      Issue No: Vol. 436-437 (2018)
       
  • Data-independent Random Projections from the feature-map of the
           homogeneous polynomial kernel of degree two
    • Authors: Daniel López-Sánchez; Juan Manuel Corchado; Angélica González Arrieta
      Pages: 214 - 226
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      This paper presents a novel non-linear extension of the Random Projection method based on the degree-2 homogeneous polynomial kernel. Our algorithm implicitly maps data points to the high-dimensional feature space of that kernel and from there performs a Random Projection to a Euclidean space of the desired dimensionality. Pairwise distances between data points in the kernel feature space are approximately preserved in the resulting representation. As opposed to previous kernelized Random Projection versions, our method is data-independent and preserves much of the computational simplicity of the original algorithm. This is achieved by focusing on a specific kernel function, which allowed us to analyze the effect of its associated feature mapping on the distribution of the Random Projection hyperplanes. Finally, we present empirical evidence that the proposed method outperforms alternative approaches in terms of pairwise distance preservation, while being significantly more efficient. We also show how our method can be used to approximate the accuracy of non-linear classifiers with efficient linear classifiers on some datasets.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.022
      Issue No: Vol. 436-437 (2018)
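
Editor's sketch (this is the naive explicit construction, not the paper's data-independent trick): the degree-2 homogeneous polynomial kernel k(x, y) = (x·y)^2 has the explicit feature map phi(x) = vec(x x^T), so a Gaussian random projection of phi(x) approximately preserves dot products, and hence distances, in the kernel feature space.

import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 64                                # input dim, projected dim
R = rng.standard_normal((m, d * d)) / np.sqrt(m)

def project(x):
    return R @ np.outer(x, x).ravel()        # random projection of phi(x)

x, y = rng.standard_normal(d), rng.standard_normal(d)
print("kernel value:    ", (x @ y) ** 2)
print("after projection:", project(x) @ project(y))   # close on average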
       
  • Crowd evacuation simulation approach based on navigation knowledge and
           two-layer control mechanism
    • Authors: Hong Liu; Baoxi Liu; Hao Zhang; Liang Li; Xin Qin; Guijuan Zhang
      Pages: 247 - 267
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      This paper presents a crowd evacuation simulation approach based on navigation knowledge and a two-layer control mechanism. In this approach, using the multi-population cultural algorithm framework, the control mechanism of the crowd evacuation simulation is divided into two parts: the belief space and the population space. The population space is divided into groups (sub-populations), and a leader is selected in each group according to a fitness value. The belief space comprises multiple agents and a knowledge base. Each navigation agent corresponds to a group leader. A navigation agent obtains a leader's position through the acceptance function and passes the information to the knowledge base. On the basis of the position, the obstacles, and the congestion situation provided by the navigation agent, the knowledge base management agent dynamically plans the path and provides the navigation agent with the next position along the path. The navigation agent then passes this information to the leader through the affection function. The individuals in the group follow the leader, via the social force model, toward the location provided by the navigation agent. The entire process is repeated until the exit is reached. The path information of individuals who successfully reach the exit is recorded, and the knowledge base is updated. This method establishes a knowledge-mediated relationship between the population and the navigation agents, and transforms a blindly moving crowd into a guided evacuation by decomposing the mass evacuation simulation problem into sub-problems of moving blocks. The approach also mitigates the slow speed of microscopic models, in which each individual computes its own path. The simulation results illustrate the effectiveness of the method.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.023
      Issue No: Vol. 436-437 (2018)
       
  • An adaptive decision-making method with fuzzy Bayesian reinforcement
           learning for robot soccer
    • Authors: Haobin Shi; Zhiqiang Lin; Shuge Zhang; Xuesi Li; Kao-Shing Hwang
      Pages: 268 - 281
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      A robot soccer system is a typical complex time-sequence decision-making system, in which problems of uncertain knowledge representation and complex models always exist. To achieve an adaptive decision-making mechanism, a method based on fuzzy Bayesian reinforcement learning (RL) is proposed in this paper. To extract the features used by the proposed learning method, a fuzzy comprehensive evaluation method (FCEM) is developed, which classifies the situations in robot soccer games into a set of features. With the fuzzy analytic hierarchy process (FAHP), the FCEM calculates weights, according to defined factors, for these features, which form the dimensions of the state space; the weight imposed on each feature determines the range of each dimension. Through a Bayesian network, the comprehensively evaluated features are transformed into decision bases, and an RL method for strategy selection over time is implemented. The fuzzy mechanism incorporates experience into the learning system and provides flexibility in state aggregation, thus improving learning efficiency. The experimental results demonstrate that the proposed method achieves better knowledge representation and strategy selection than competing methods.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.032
      Issue No: Vol. 436-437 (2018)
       
  • A two-phase tabu-evolutionary algorithm for the 0–1 multidimensional
           knapsack problem
    • Authors: Xiangjing Lai; Jin-Kao Hao; Fred Glover; Zhipeng Lü
      Pages: 282 - 301
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 436–437
      The 0–1 multidimensional knapsack problem is a well-known NP-hard combinatorial optimization problem with numerous applications. In this work, we present an effective two-phase tabu-evolutionary algorithm for solving this computationally challenging problem. The proposed algorithm integrates two solution-based tabu search methods into an evolutionary framework that applies a hyperplane-constrained crossover operator to generate offspring solutions, a dynamic method to determine search zones of interest, and a diversity-based population updating rule to maintain a healthy population. We show the competitiveness of the proposed algorithm by presenting computational results on the 281 benchmark instances commonly used in the literature. In particular, in a computational comparison with the best algorithms in the literature on multiple data sets, we show that our method on average matches more than twice as many best known solutions on the harder problems as any other method, and in addition yields improved best solutions (new lower bounds) for 4 difficult instances. We also investigate two key ingredients of the algorithm to understand their impact on its performance.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.026
      Issue No: Vol. 436-437 (2018)
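
Editor's illustration of the problem itself rather than of the two-phase tabu-evolutionary algorithm: a toy 0–1 multidimensional knapsack instance and a naive profit-to-weight greedy heuristic (all data invented).

import numpy as np

p = np.array([10, 13, 7, 8])           # item profits
W = np.array([[2, 3, 1, 2],            # resource 1 consumed by each item
              [3, 1, 2, 2]])           # resource 2 consumed by each item
b = np.array([5, 5])                   # resource capacities

order = np.argsort(-p / W.sum(axis=0)) # greedy: best profit per total weight first
x, used = np.zeros(4, dtype=int), np.zeros(2)
for j in order:
    if np.all(used + W[:, j] <= b):    # feasibility in every dimension
        x[j] = 1
        used = used + W[:, j]
print("solution", x, "profit", p @ x)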
       
  • PTSVRs: Regression models via projection twin support vector machine
    • Authors: Xinjun Peng; De Chen
      Pages: 1 - 14
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Taking motivation from the projection twin support vector machine (PTSVM) formulation for recognition, this paper proposes two novel projection twin support vector regression (PTSVR) models, called pair-shifted PTSVR (PPTSVR) and single-shifted PTSVR (SPTSVR). PTSVRs indirectly construct the target regressor from two functions (hyperplanes) obtained from two smaller-sized quadratic programming problems (QPPs), in which each normal direction minimizes the within-class variance of the projection of the shifted set (or original set) while keeping the projected center at a distance of at least 1 from the projection of the other shifted set. As with other twin support vector machine (TWSVM) models, the learning speed of PTSVRs is faster than that of classical support vector regression (SVR), since each of their QPPs has only half the size. Experimental results on several synthetic as well as benchmark datasets indicate the significant advantage of PPTSVR and SPTSVR in generalization performance.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.002
      Issue No: Vol. 435 (2018)
       
  • On the exact relationship between the Mutual Information Metric and the
           Success Rate Metric
    • Authors: Hailong Zhang; Yongbin Zhou
      Pages: 15 - 25
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      In real scenarios, two metrics were proposed to evaluate the physical leakage of a crypto device and the key recovery efficiency of a side channel attack: the mutual information (MI) metric quantifies the amount of physical leakage of a crypto device, while the success rate (SR) metric quantifies the key recovery efficiency of a side channel attack. Although it is usually assumed that the amount of physical leakage of a crypto device strongly influences the key recovery efficiency of an attack, the exact relationship between the MI metric and the SR metric is not clear, and there is no quantitative knowledge of how the physical leakage of a device affects the key recovery efficiency of an attack. In light of this, we analyze and give the exact relationship between the MI metric and the SR metric. Additionally, we perform empirical evaluations to verify the soundness of the theoretical analysis. The merit of this work is that the MI metric can be used to understand both the physical leakage of a crypto device and the key recovery efficiency of a side channel attack, which decreases the evaluation complexity and shortens the evaluation period.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.053
      Issue No: Vol. 435 (2018)
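
Editor's sketch of the kind of quantity the MI metric formalizes: a histogram estimate of the mutual information between a one-bit intermediate value and its noisy leakage. The leakage model and noise level are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
z = rng.integers(0, 2, 100_000)                  # secret-dependent bit
leak = z + 0.8 * rng.standard_normal(z.size)     # noisy side-channel observation

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

edges = np.histogram_bin_edges(leak, bins=40)
h_leak = entropy(np.histogram(leak, edges)[0].astype(float))
h_cond = sum((z == v).mean() * entropy(np.histogram(leak[z == v], edges)[0].astype(float))
             for v in (0, 1))
print("MI estimate (bits):", h_leak - h_cond)    # I(Z; leak) = H(leak) - H(leak|Z)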
       
  • Bireducts with tolerance relations
    • Authors: M. José Benítez-Caballero; Jesús Medina; Eloísa Ramírez-Poussa; Dominik Ślȩzak
      Pages: 26 - 39
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Reducing the number of attributes by preventing the occurrence of incompatibilities and eliminating the noise existing in the original data is an important goal in different frameworks, such as those focused on modelling and processing incomplete information in information systems. Bireducts were introduced in Rough Set Theory (RST) as a successful solution to the problem of achieving a balance between the elimination of attributes and the characterization of objects that the remaining attributes can still distinguish. This paper considers bireducts in a general framework in which attributes induce tolerance relations over the available objects. In order to compute the new reducts and bireducts, a characterization based on a general discernibility function is given.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.037
      Issue No: Vol. 435 (2018)
       
  • A Pearson’s correlation coefficient based decision tree and its
           parallel implementation
    • Authors: Yashuang Mu; Xiaodong Liu; Lidong Wang
      Pages: 40 - 58
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      In this paper, a Pearson’s correlation coefficient based decision tree (PCC-Tree) is established and its parallel implementation is developed in the Map-Reduce framework (MR-PCC-Tree). The proposed methods employ Pearson’s correlation coefficient as a new measure of feature quality to determine the optimal splitting attributes and splitting points in the growth of decision trees. Moreover, the proposed MR-PCC-Tree applies Map-Reduce technology to every component of the decision tree learning process for parallel computing, which mainly consists of a parallel Pearson’s correlation coefficient based splitting rule and a parallel data splitting method. The experimental analysis is conducted on a series of UCI benchmark data sets of different scales. Compared with several traditional decision tree classifiers, including BFT, C4.5, LAD, SC and NBT, on 17 data sets, the proposed PCC-Tree is, as a whole, no worse than the traditional models. Furthermore, the experimental results on another 8 data sets show the feasibility of the proposed MR-PCC-Tree and its good parallel performance in reducing computation time for large-scale data classification problems.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.059
      Issue No: Vol. 435 (2018)
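
Editor's sketch of the abstract's central idea only: score candidate split attributes by the absolute Pearson correlation between each feature and the class label. Tree growth, split thresholds and the Map-Reduce layer are all omitted.

import numpy as np

def pearson_scores(X, y):
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)        # |PCC| of each feature with the label

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 2] > 0).astype(float)     # label driven by feature 2
print("chosen split attribute:", int(np.argmax(pearson_scores(X, y))))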
       
  • Efficient computation of shortest absent words in complete genomes
    • Authors: Abdulrakeeb M. Al-Ssulami
      Pages: 59 - 68
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      The computation of shortest absent words in the genomes of organisms is a recent alignment-free technique used to compare the genomes of species and to make functional inferences. This technique offers a good indication of mutations and supports phylogenetic tree construction, genome reconstruction, drug target identification, and pesticide development. Traditional alignment techniques are computationally expensive and impractical on complete, large genomes. Currently, two solutions exist to solve this problem at the genome level for large sizes: a stochastic method running in O(n) time, where the shortest length of absent words is approximated and entered manually, and a standalone method costing O(n log₂⌈log₂(n+1)/2⌉) time, where the shortest length is determined automatically by the algorithm itself. In this paper, we present a standalone method that computes the shortest absent words in O(n) time. The proposed method produces the complete set of shortest absent words from the two strands of the whole human genome in 3.5 minutes. Our method scans the genome twice: in the first round, the upper bound on the shortest length is computed, and in the second round the absent words are identified. The method is efficient and easy to use; it requires only the genome as an input parameter.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.055
      Issue No: Vol. 435 (2018)
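
Editor's baseline sketch, much simpler (and more memory-hungry) than the paper's O(n) algorithm: grow k until some k-mer over {A, C, G, T} is missing from the sequence, then list the absent words of that length.

from itertools import product

def shortest_absent_words(genome, alphabet="ACGT"):
    k = 1
    while True:
        present = {genome[i:i + k] for i in range(len(genome) - k + 1)}
        if len(present) < len(alphabet) ** k:          # something is absent
            kmers = ("".join(w) for w in product(alphabet, repeat=k))
            return k, [w for w in kmers if w not in present]
        k += 1

k, words = shortest_absent_words("ACGTACGGTTCAGT")
print("shortest absent length:", k, "e.g.:", words[:5])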
       
  • Illumination compensation for face recognition using adaptive singular
           value decomposition in the wavelet domain
    • Authors: Jing-Wein Wang; Ngoc Tuyen Le; Jiann-Shu Lee; Chou-Chen Wang
      Pages: 69 - 93
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Lighting variations are a challenge in face recognition. To overcome this problem, this paper proposes a novel illumination compensation method, called adaptive singular value decomposition in the 2D discrete wavelet domain (ASVDW), to enhance face images. First, an efficient brightness detector based on the blue pixel values of the red-green-blue (RGB) color channels is used to classify a color face image as dark, normal, or bright before applying the corresponding Gaussian template. The RGB color channels of the face image are then transformed to the 2D discrete wavelet domain. The frequency subband coefficients of the three color channels are automatically adjusted by multiplying the singular value matrices of these frequency subband coefficient matrices by their corresponding compensation weight coefficients. An efficient image denoising model is then applied, and a 2D inverse discrete wavelet transform is performed to obtain the ASVDW-compensated color face images without the lighting effect. In addition, a region-based ASVDW method (RASVDW), which applies the ASVDW algorithm to four regions of an image, is introduced to reduce the computing time. Experimental results validate the efficiency of the proposed methods.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.057
      Issue No: Vol. 435 (2018)
       
  • CF-integrals: A new family of pre-aggregation functions with application
           to fuzzy rule-based classification systems
    • Authors: Giancarlo Lucca; José Antonio Sanz; Graçaliz Pereira Dimuro; Benjamín Bedregal; Humberto Bustince; Radko Mesiar
      Pages: 94 - 110
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      This paper introduces the family of CF-integrals, pre-aggregation functions that generalize the Choquet integral by considering a bivariate function F that is left 0-absorbent. We show that CF-integrals are $\vec{1}$-pre-aggregation functions, studying the conditions under which they are idempotent and/or averaging functions. This characterization is an important aspect of our approach, since we apply these functions in the Fuzzy Reasoning Method (FRM) of a fuzzy rule-based classification system and, in the literature, non-averaging aggregation functions have been observed to usually provide better results. We carry out a study with several subfamilies of CF-integrals having averaging or non-averaging characteristics. As expected, the proposed non-averaging CF-integrals obtain more accurate results than the averaging ones, thus offering new possibilities for accurately aggregating the information in the FRM. Furthermore, this allows us to improve on the results of classical FRMs such as the winning rule and the additive combination.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.029
      Issue No: Vol. 435 (2018)
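
Editor's sketch: the discrete Choquet integral, plus a Choquet-like variant in which the product is replaced by a bivariate function F, mirroring the generalization the abstract describes (the paper's exact definition and the conditions on F may differ). The fuzzy measure and the choice F(a, b) = a*b**2 are invented for the example.

import numpy as np

def choquet_like(x, measure, F=lambda a, b: a * b):
    order = np.argsort(x)                 # sort inputs increasingly
    total, prev = 0.0, 0.0
    for i, idx in enumerate(order):
        alive = frozenset(int(j) for j in order[i:])   # criteria still active
        total += F(x[idx] - prev, measure[alive])
        prev = x[idx]
    return min(1.0, total)

m = {frozenset(): 0.0, frozenset({0}): 0.6,
     frozenset({1}): 0.7, frozenset({0, 1}): 1.0}      # toy fuzzy measure
x = np.array([0.3, 0.8])
print(choquet_like(x, m))                              # plain Choquet: 0.65
print(choquet_like(x, m, F=lambda a, b: a * b ** 2))   # a non-averaging F-variant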
       
  • A new algorithm for reduct computation based on gap elimination and
           attribute contribution
    • Authors: Vladímir Rodríguez-Diez; José Fco. Martínez-Trinidad; Jesús A. Carrasco-Ochoa; Manuel S. Lazo-Cortés
      Pages: 111 - 123
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Attribute reduction is a key aspect of Rough Set Theory. Finding the complete set of reducts is important for solving problems such as the assessment of attribute relevance, multi-objective cost-sensitive attribute reduction, and dynamic reduct computation. The main limitation in the application of Rough Set methods is that finding all reducts of a decision system has exponential complexity with respect to the number of attributes. Several algorithms have been reported to reduce the cost of reduct computation. Unfortunately, most of these algorithms rely on high-cost operations for candidate evaluation. Therefore, in this paper, we propose a new algorithm for computing all reducts of a decision system, based on the pruning properties of gap elimination and attribute contribution, that uses simpler operations for candidate evaluation in order to reduce the runtime. Finally, the proposed algorithm is evaluated and compared with other state-of-the-art algorithms on synthetic and real decision systems.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.027
      Issue No: Vol. 435 (2018)
       
  • A disease diagnosis and treatment recommendation system based on big data
           mining and cloud computing
    • Authors: Jianguo Chen; Kenli Li; Huigui Rong; Kashif Bilal; Nan Yang; Keqin Li
      Pages: 124 - 149
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      It is crucial to provide compatible treatment schemes for a disease according to the various symptoms at its different stages. However, most classification methods may be ineffective at accurately classifying a disease characterized by multiple treatment stages, various symptoms, and multi-pathogenesis. Moreover, there are limited exchanges and cooperation in disease diagnosis and treatment between different departments and hospitals. Thus, when new diseases occur with atypical symptoms, inexperienced doctors may have difficulty identifying them promptly and accurately. Therefore, to maximize the utilization of the advanced medical technology of developed hospitals and the rich medical knowledge of experienced doctors, a Disease Diagnosis and Treatment Recommendation System (DDTRS) is proposed in this paper. First, to identify disease symptoms more accurately, a Density-Peaked Clustering Analysis (DPCA) algorithm is introduced for disease-symptom clustering. In addition, association analyses of Disease-Diagnosis (D-D) rules and Disease-Treatment (D-T) rules are conducted separately with the Apriori algorithm. Appropriate diagnosis and treatment schemes are recommended to patients and inexperienced doctors, even in a limited therapeutic environment. Moreover, to achieve high performance and low-latency response, we implement a parallel solution for DDTRS on the Apache Spark cloud platform. Extensive experimental results demonstrate that the proposed DDTRS performs disease-symptom clustering effectively and derives disease treatment recommendations intelligently and accurately.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.001
      Issue No: Vol. 435 (2018)
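
Editor's sketch of plain Apriori frequent-itemset mining, the component the abstract applies to D-D and D-T rules, run on toy "symptom" transactions; this is not the DDTRS pipeline or its Spark parallelization.

from itertools import combinations

def apriori(transactions, min_support=2):
    level = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while level:
        counts = {s: sum(1 for t in transactions if s <= t) for s in level}
        kept = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(kept)
        # join step: combine frequent sets that differ in exactly one item
        level = {a | b for a, b in combinations(kept, 2)
                 if len(a | b) == len(a) + 1}
    return frequent

tx = [frozenset("ABC"), frozenset("AB"), frozenset("AC"), frozenset("BC")]
for itemset, support in sorted(apriori(tx).items(), key=lambda kv: -kv[1]):
    print(set(itemset), support)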
       
  • Fréchet derivative for linearly correlated fuzzy function
    • Authors: Estevão Esmi; Francielle Santo Pedro; Laécio Carvalho de Barros; Weldon Lodwick
      Pages: 150 - 160
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      This article introduces the concept of the linearly correlated Fréchet derivative for fuzzy processes. For linearly autocorrelated fuzzy functions, a practical method to calculate the Fréchet derivative is given. Moreover, we show that the Fréchet derivative is given by the derivative of appropriate standard functions, and we illustrate this concept with several examples. Finally, we show that, under certain conditions, the Fréchet derivative coincides with that given by interactive fuzzy arithmetic.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.051
      Issue No: Vol. 435 (2018)
       
  • Lossy compression approach to subspace clustering
    • Authors: Łukasz Struski; Jacek Tabor; Przemysław Spurek
      Pages: 161 - 183
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      We present a novel subspace clustering algorithm, SuMC (Subspace Memory Clustering), based on information theory, the Minimal Description Length Principle (MDLP), and lossy compression. SuMC simultaneously solves two fundamental problems of subspace clustering: determining the number of clusters and their optimal dimensions. SuMC requires only two parameters, the data compression ratio r and the number of bits used to code a single scalar, and the optimal value of the compression ratio can be estimated by the Bayesian information criterion (BIC). We verify that in typical tasks of clustering, image segmentation, and data compression, we obtain results that are better than or comparable to those of the leading subspace clustering methods.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.056
      Issue No: Vol. 435 (2018)
       
  • Weighted fuzzy interpolated reasoning based on ranking values of polygonal
           fuzzy sets and new scale and move transformation techniques
    • Authors: Shyi-Ming Chen; Stenly Ibrahim Adam
      Pages: 184 - 202
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      In this paper, we propose a new transformation-based weighted fuzzy interpolative reasoning (FIR) method based on the ranking values of polygonal fuzzy sets (PFSs) and newly proposed scale and move transformation techniques. The proposed weighted FIR method yields more reasonable results than existing methods, where the weight of each antecedent variable and the weight of each fuzzy rule are generated automatically. Moreover, the proposed scale and move transformation techniques can handle FIR with singleton fuzzy sets and PFSs. We also apply the proposed weighted FIR method to predict diarrheal disease rates in remote villages. The proposed method offers a very useful way to perform weighted FIR in sparse fuzzy rule-based systems.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.054
      Issue No: Vol. 435 (2018)
       
  • Dynamic Multi-objective Estimation of Distribution Algorithm based on
           Domain Adaptation and Nonparametric Estimation
    • Authors: Min Jiang; Liming Qiu; Zhongqiang Huang; Gary G. Yen
      Pages: 203 - 223
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Although dynamic optimization and multi-objective optimization have individually made considerable progress, solving dynamic multi-objective optimization problems remains a monumental challenge, since their multiple, conflicting objectives can change over time. In this paper, we propose a Domain Adaptation and Nonparametric Estimation-based Estimation of Distribution Algorithm, called DANE-EDA, to solve dynamic multi-objective optimization problems. Notable features of the proposed algorithm include importance sampling, nonparametric density estimation, probabilistic prediction, and a domain adaptation technique, seamlessly unified under an innovative framework. The design takes full advantage of the powerful Monte-Carlo method and of transfer learning. This combination helps the proposed algorithm maintain a delicate exploration-exploitation trade-off from temporal and spatial perspectives, while overcoming a shortcoming of transfer learning, namely the loss of diversity. After proving convergence and analyzing the computational complexity of DANE-EDA, we compare the proposed method with nine EDAs or dynamic multi-objective optimization algorithms on twelve different test instances. The experimental results affirm the effectiveness of the proposed method in addressing dynamic multi-objective optimization problems.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.058
      Issue No: Vol. 435 (2018)
       
  • An efficient histogram-based index mapping mechanism for reversible data
           hiding in VQ-compressed images
    • Authors: Peyman Rahmani; Gholamhossein Dastghaibyfard
      Pages: 224 - 239
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      In this paper, a reversible data hiding scheme for vector quantization (VQ)-compressed images is proposed. The scheme introduces two methods to explore the high correlations between the VQ indices of an index table, and then exploits the eliminated redundancy for data embedding. The first method utilizes the concept of side matching, and the second uses the locally repetitive occurrence of VQ indices in VQ-compressed images. In this research, concepts from information theory are used to design an index mapping mechanism that is efficient from the embedding capacity point of view. The proposed index mapping mechanism partitions the positions of the sorted codebook into intervals, and assigns an interval of positions to each position with a high hit rate in the sorted codebook. Then, based on the secret bits, each index located at a high-hit-rate position of the sorted codebook is mapped to one of the indices in the interval assigned to that position. Furthermore, an interval of the sorted codebook is reserved as the indicator of the indices with low hit rates. The experimental results demonstrate that the proposed scheme significantly outperforms existing schemes in terms of embedding capacity and compression-embedding efficiency.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.041
      Issue No: Vol. 435 (2018)
       
  • Differential evolution with adaptive trial vector generation strategy and
           cluster-replacement-based feasibility rule for constrained optimization
    • Authors: Bin Xu; Xu Chen; Lili Tao
      Pages: 240 - 262
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Constrained optimization problems (COPs) are common in many fields. To solve such problems effectively, we propose in this paper a new constrained optimization evolutionary algorithm (COEA), named CACDE, that combines a differential evolution (DE) algorithm with an adaptive trial vector generation strategy and a cluster-replacement-based feasibility rule. In CACDE, candidate mutation strategies, scale factors and crossover rates are stored in candidate pools, and each element in the pools is assigned a selection probability. During the trial vector generation stage, the mutation strategy, scale factor and crossover rate for each target vector are competitively determined based on these selection probabilities. Meanwhile, the selection probabilities are dynamically updated based on statistical information about the success of previous searches in generating improved solutions. Moreover, to alleviate the greediness of the feasibility rule, the main population is divided into several clusters, and one vector in each cluster is conditionally replaced with an archived infeasible vector that has a low objective value. The superior performance of CACDE is validated via comparisons with several state-of-the-art COEAs on 2 sets of artificial problems and 5 widely used mechanical design problems. The results show that CACDE is an effective approach for solving COPs, largely due to the use of adaptive DE and the cluster-replacement-based feasibility rule.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.014
      Issue No: Vol. 435 (2018)
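
Editor's sketch of the adaptive-pool idea in isolation: a bare-bones DE loop in which the scale factor is drawn from a candidate pool whose selection probabilities are updated from success counts. Constraint handling, the strategy and crossover-rate pools, and the cluster-replacement rule are omitted; every constant here is an assumption.

import numpy as np

rng = np.random.default_rng(0)
sphere = lambda v: float(v @ v)              # toy objective to minimize

pop = rng.uniform(-5, 5, size=(20, 4))
fit = np.array([sphere(v) for v in pop])
F_pool = np.array([0.5, 0.8, 1.0])           # candidate scale factors
success = np.ones(len(F_pool))               # smoothed success counts

for _ in range(200):
    probs = success / success.sum()
    for i in range(len(pop)):
        k = rng.choice(len(F_pool), p=probs)
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = a + F_pool[k] * (b - c)                # DE/rand/1 mutation
        cross = rng.random(pop.shape[1]) < 0.9         # binomial crossover
        trial = np.where(cross, trial, pop[i])
        if sphere(trial) < fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, sphere(trial)
            success[k] += 1                            # credit the scale factor
print("best fitness:", fit.min())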
       
  • Nonnegative matrix factorization with mixed hypergraph regularization for
           community detection
    • Authors: Wenhui Wu; Sam Kwong; Yu Zhou; Yuheng Jia; Wei Gao
      Pages: 263 - 281
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Community structure is the most significant attribute of networks, and it is often identified to help discover the underlying organization of networks. Current nonnegative matrix factorization (NMF) based community detection methods make use of the network topology and assume that networks can be projected onto a latent low-dimensional space in which the nodes can be efficiently clustered. In this paper, we propose a novel framework named mixed hypergraph regularized nonnegative matrix factorization (MHGNMF), which takes higher-order information among the nodes into consideration to enhance clustering performance. The hypergraph regularization term forces the nodes within the same hyperedge to be projected onto the same latent subspace, so that a more discriminative representation is achieved. In the proposed framework, we generate a set of hyperedges by mixing two kinds of neighbors for each centroid, which makes full use of topological connection information and structural similarity information. Tests on two artificial benchmarks and eight real-world networks demonstrate that the proposed framework outperforms the other state-of-the-art methods.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.008
      Issue No: Vol. 435 (2018)
       
  • Method for three-way decisions using ideal TOPSIS solutions at Pythagorean
           fuzzy information
    • Authors: Decui Liang; Zeshui Xu; Dun Liu; Yao Wu
      Pages: 282 - 295
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volume 435
      Pythagorean fuzzy sets (PFSs), a new generalization of intuitionistic fuzzy sets (IFSs), can handle uncertain information more flexibly in the process of decision making. As a natural extension of three-way decisions with decision-theoretic rough sets (DTRSs), this paper proposes a new model of three-way decisions and develops the corresponding decision-making procedure based on Pythagorean fuzzy information systems. In contrast to the results reported in most existing papers, we consider the general situation in which the information system does not have class labels. In this case, we encounter two challenges: reinterpreting the loss function and reinterpreting the conditional probability. Considering the properties of PFSs, we first introduce the Pythagorean fuzzy number (PFN) into DTRSs, which provides a new interpretation for the loss function. Then, we construct a new model of Pythagorean fuzzy decision-theoretic rough sets (PFDTRSs) based on the Bayesian decision procedure. As for the conditional probability, we estimate it using the technique for order preference by similarity to an ideal solution (TOPSIS). Furthermore, we design a decision-making procedure for three-way decisions based on ideal solutions in the Pythagorean fuzzy information system. Our proposed method not only takes the decision risk into consideration, but also tells us how to choose the action for each project and gives its corresponding semantic explanation, which complements the decision results of TOPSIS. Finally, we illustrate the application of three-way decisions with an example of research and development (R&D) project selection and validate our method via a comparative analysis.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2018.01.015
      Issue No: Vol. 435 (2018)
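
Editor's sketch of classical (crisp) TOPSIS, the method the paper uses to estimate the conditional probability; the decision matrix and weights are invented, and the Pythagorean-fuzzy machinery is omitted.

import numpy as np

X = np.array([[7., 9., 9.],        # alternatives (rows) x criteria (columns)
              [8., 7., 8.],
              [9., 6., 8.]])
w = np.array([0.4, 0.3, 0.3])      # criterion weights

V = w * X / np.linalg.norm(X, axis=0)          # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)     # all criteria treated as benefits
d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)            # relative closeness in [0, 1]
print("ranking, best first:", np.argsort(-closeness))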
       
  • Learning compact zero-order TSK fuzzy rule-based systems for
           high-dimensional problems using an Apriori + local search approach
    • Authors: Javier Cózar; Luis delaOssa; José A. Gámez
      Pages: 1 - 16
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 433–434
      Learning fuzzy rule-based systems entails searching for a set of fuzzy rules that fits the training data. Even when using fixed fuzzy partitions, the number of rules that can be formed is exponential in the number of variables. Thus, the search must be carried out by means of metaheuristics such as genetic algorithms, and is sometimes restricted to the set of candidate rules that reach a minimum support. In this article, we propose and evaluate two methods to learn zero-order Takagi–Sugeno–Kang fuzzy systems in medium-to-high dimensional domains. First, we introduce the minimum individual error of a rule into the criterion for candidate selection. Then, given the intrinsic locality of fuzzy rule-based systems, where each rule mainly interacts with adjacent rules, we study the use of local search algorithms to carry out the search for the final rule base. Results show that the proposed scheme for candidate rule selection leads to an improvement in the results, regardless of the subsequent search algorithm. Moreover, local search-based algorithms achieve competitive results while substantially reducing the number of rules of the learnt systems.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.026
      Issue No: Vol. 433-434 (2018)
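
Editor's sketch of zero-order TSK inference with fixed triangular partitions: each rule's consequent is a constant, and the output is the firing-strength-weighted average of those constants. The partitions and the three-rule base are toy assumptions, not a system learned by the paper's methods.

import numpy as np

def tri(x, a, b, c):
    # triangular membership with peak at b on support [a, c]
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

labels = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]   # low/mid/high
rules = [((0, 0), 0.1), ((1, 1), 0.5), ((2, 2), 0.9)]           # (antecedents, constant)

def tsk0(x1, x2):
    w = np.array([tri(x1, *labels[i]) * tri(x2, *labels[j]) for (i, j), _ in rules])
    c = np.array([const for _, const in rules])
    return float(w @ c / w.sum()) if w.sum() > 0 else 0.0

print(tsk0(0.4, 0.6))   # -> 0.5, dominated by the "mid" rule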
       
  • Two-phase linear reconstruction measure-based classification for face
           recognition
    • Authors: Jianping Gou; Yong Xu; David Zhang; Qirong Mao; Lan Du; Yongzhao Zhan
      Pages: 17 - 36
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 433–434
      In this article we propose several two-phase representation-based classification (RBC) methods inspired by the two-phase test sample sparse representation (TPTSR) method with the L2-norm. We first introduce two simple extensions of TPTSR using the L1-norm alone and the combination of the L1-norm and L2-norm, respectively. We then propose two-phase linear reconstruction measure-based classification (TPLRMC), which adopts the linear reconstruction measure (LRM). By decomposing each feature sample as a weighted linear combination of the other feature samples, TPLRMC can measure the similarity between any pair of feature samples. The linear reconstruction coefficients capture the neighborhood structure of the features that is hidden in the data, so these coefficients, with Lp-norm regularization, can be used as good similarity measures between training samples and test samples in the classifier design of TPLRMC, enhancing its discriminative capability. In the classification procedure, TPLRMC first coarsely searches the K nearest neighbors of a given query sample with the LRM, then finely represents the query sample as a linear combination of the chosen K nearest neighbors, and finally uses the LRM to perform classification. Experimental results on six face recognition databases and two object recognition databases demonstrate that the proposed methods outperform the competitors used in the experiments.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.025
      Issue No: Vol. 433-434 (2018)
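
Editor's sketch of a generic two-phase representation classifier in the spirit of TPTSR/TPLRMC, with ridge-regularized least squares standing in for the paper's Lp-regularized linear reconstruction measure: phase one keeps the K training samples most useful for reconstructing the query; phase two re-represents the query with only those samples and picks the class with the smallest residual. Everything here is a simplification.

import numpy as np

def two_phase_classify(X, y, q, K=10, lam=1e-2):
    # X: (n_samples, dim) training set, y: labels, q: (dim,) query sample
    def codes(A):                      # reconstruction coefficients for q
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ q)
    c1 = codes(X.T)
    keep = np.argsort(-np.abs(c1))[:K]             # phase 1: K most useful samples
    c2 = codes(X[keep].T)                          # phase 2: refined coefficients
    best, best_res = None, np.inf
    for cls in np.unique(y[keep]):
        idx = y[keep] == cls
        res = np.linalg.norm(q - X[keep][idx].T @ c2[idx])   # class-wise residual
        if res < best_res:
            best, best_res = cls, res
    return best

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
print(two_phase_classify(X, y, q=rng.normal(3, 1, 5)))   # expected: 1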
       
  • Linguistic terms with weakened hedges: A model for qualitative decision
           making under uncertainty
    • Authors: Hai Wang; Zeshui Xu; Xiao-Jun Zeng
      Pages: 37 - 54
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 433–434
      When expressing experts’ opinions in qualitative decision making (QDM), linguistic hedges can be used to modify the force expressed by a predefined linguistic term. If an expert is not sure which single term to select, weakened hedges are a natural way to express the uncertainty; this is usually implemented by using a hedge to modify the most possible term, as in the expression “more or less good”. To model the uncertainty implied by hedges in QDM, this paper presents a novel linguistic representational and computational model in which the linguistic expressions take the form of a weakened hedge plus a linguistic term, named linguistic terms with weakened hedges (LTWHs). The syntax of LTWHs is defined by a set of hedges and a set of linguistic terms. The semantics of an LTWH is determined objectively, based on the semantics of the term and a similarity measure over the reference domain. Accordingly, the negation, order relations, and some basic operations of LTWHs are defined. To illustrate the effectiveness of LTWHs in granular computing, the connection to some multi-granularity linguistic models is explored and a process for unifying multi-granularity linguistic information is developed. The major contributions of this paper are: (1) the proposed model enables a new manner of expressing and operating on uncertain linguistic information in QDM; (2) it possesses clear syntax and semantics, and the computational results are very interpretable; and (3) the proposed solution for multi-granularity linguistic unification maintains the semantics of the original linguistic information.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.036
      Issue No: Vol. 433-434 (2018)
       
  • Map matching for low-sampling-rate GPS trajectories by exploring real-time
           moving directions
    • Authors: Yu-Ling Hsueh; Ho-Chian Chen
      Pages: 55 - 69
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 433–434
      Map matching is the process of matching a series of recorded geographic coordinates (e.g., a GPS trajectory) to a road network. Due to GPS positioning errors and sampling constraints, the GPS data collected by GPS devices are not precise, and the location of a user cannot always be shown correctly on the map. Therefore, map matching is an important preprocessing step for many applications, such as navigation systems, traffic flow analysis, and autonomous cars. Unfortunately, most current map-matching algorithms only consider the distance between the GPS points and the road segments, the topology of the road network, and the speed constraints of the road segments to determine the matching results; moreover, most cannot handle matching errors at junctions. In this paper, we propose a spatio-temporal based matching algorithm (STD-matching) for low-sampling-rate GPS trajectories. STD-matching considers (1) spatial features such as the distance information and topology of the road network, (2) the speed constraints of the road network, and (3) the real-time moving direction, which reflects the movement of the user. We also reduce the running time by applying GPS clustering, GPS smoothing, and the A* shortest-path algorithm. In our experiments, we compare STD-matching with three existing algorithms, ST-matching, stMM, and HMM-RCM, on a real data set. The experimental results show that our STD-matching algorithm outperforms the three existing algorithms in terms of matching accuracy.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.031
      Issue No: Vol. 433-434 (2018)
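
Editor's sketch of the general scoring idea (not the paper's STD-matching algorithm): weight each candidate road segment by a Gaussian model of GPS error and by the agreement between the user's real-time moving direction and the segment bearing. The 20 m noise level and the 50/50 blend are assumptions.

import math

def direction_agreement(move_bearing, seg_bearing):
    diff = abs((move_bearing - seg_bearing + 180) % 360 - 180)   # in [0, 180]
    return math.cos(math.radians(diff)) if diff < 90 else 0.0

def candidate_score(dist_m, move_bearing, seg_bearing, sigma=20.0):
    spatial = math.exp(-0.5 * (dist_m / sigma) ** 2)   # GPS error likelihood
    return spatial * (0.5 + 0.5 * direction_agreement(move_bearing, seg_bearing))

print(candidate_score(8.0, move_bearing=45, seg_bearing=50))    # aligned segment
print(candidate_score(8.0, move_bearing=45, seg_bearing=220))   # opposite direction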
       
  • An artificial immune network to control interrupted flow at a signalized
           intersection
    • Authors: Ali Louati; Saber Darmoul; Sabeur Elkosantini; Lamjed ben Said
      Pages: 70 - 95
      Abstract: Publication date: April 2018
      Source: Information Sciences, Volumes 433–434
      To monitor and control interrupted flow at signalized intersections, several Traffic Signal Control Systems (TSCSs) have been developed based on optimization and artificial intelligence techniques. Although learning can provide intelligent ways to deal with disturbances, existing approaches still lack concepts and mechanisms that enable the direct representation of knowledge and explicit learning, particularly to capture and reuse previous experiences with disturbances. This article addresses this gap by designing a new TSCS based on innovative concepts and mechanisms borrowed from biological immunity. Immune memory enables the design of a Case-Based Reasoning (CBR) system in which cases provide a direct representation of knowledge about disturbances. Immune network theory enables the design of a Reinforcement Learning (RL) mechanism to interconnect cases, capture explicit knowledge about the outcomes (success and failure) of control decisions, and support decision-making that takes advantage of previous outcomes when disturbances recur. We provide a detailed description of new learning algorithms, both to create the case base and to interconnect cases using RL. The performance of the suggested TSCS is assessed by benchmarking it against two standard control strategies from the literature, namely fixed-time control and adaptive control using the Longest Queue First – Maximal Weight Matching (LQF-MWM) algorithm. The suggested TSCS is applied to an intersection simulated in VISSIM, a state-of-the-art traffic simulation package. The results show that the suggested TSCS handles different traffic scenarios with competitive performance, and that it is recommended for extreme situations involving blocked approaches and high traffic flow.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.033
      Issue No: Vol. 433-434 (2018)
       
  • Multi-label classification using a fuzzy rough neighborhood consensus
    • Authors: Sarah Vluymans; Chris Cornelis; Francisco Herrera; Yvan Saeys
      Pages: 96 - 114
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Sarah Vluymans, Chris Cornelis, Francisco Herrera, Yvan Saeys
      A multi-label dataset consists of observations associated with one or more outcomes, so the traditional classification task generalizes to the prediction of several class labels simultaneously. In this paper, we propose a new nearest-neighbor-based multi-label method. The nearest neighbor approach remains an intuitive and effective way to solve classification problems, and popular multi-label classifiers adhering to this paradigm include the MLKNN and IBLR methods. To classify an instance, our proposal derives a consensus among the labelsets of its nearest neighbors based on fuzzy rough set theory. This mathematical framework captures data uncertainty and offers a way to extract a labelset that summarizes the information contained in the labelsets of the neighbors. In our experimental study, we compare our method with five other nearest-neighbor-based multi-label classifiers under five evaluation metrics commonly used in multi-label classification. Based on the results on both synthetic and real-world datasets, we conclude that our method is a strong competitor to nearest-neighbor-based multi-label classifiers such as MLKNN and IBLR.
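
      The consensus step can be sketched compactly. Below, the fuzzy-rough lower and upper approximations of each label's membership among the k nearest neighbors are averaged and thresholded; the Gaussian similarity relation and the 0.5 threshold are generic choices that may differ from the authors' construction.

      import numpy as np

      def similarity(x, y, gamma=1.0):
          """Fuzzy indiscernibility between two instances (a common choice)."""
          return np.exp(-gamma * np.linalg.norm(x - y) ** 2)

      def fuzzy_rough_consensus(x, neighbors, labelsets, threshold=0.5):
          """Predict a labelset for x from its k nearest neighbors. For each
          label, the lower approximation min_y max(1 - sim, membership) gives
          certain membership and the upper approximation max_y min(sim,
          membership) gives possible membership; their mean is thresholded."""
          sims = np.array([similarity(x, n) for n in neighbors])
          Y = np.asarray(labelsets, dtype=float)      # shape (k, n_labels), 0/1
          lower = np.min(np.maximum(1.0 - sims[:, None], Y), axis=0)
          upper = np.max(np.minimum(sims[:, None], Y), axis=0)
          return ((lower + upper) / 2.0 >= threshold).astype(int)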

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.034
      Issue No: Vol. 433-434 (2018)
       
  • Structure connectivity and substructure connectivity of k-ary n-cube
           networks
    • Authors: Yali Lv; Jianxi Fan; D. Frank Hsu; Cheng-Kuan Lin
      Pages: 115 - 124
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Yali Lv, Jianxi Fan, D. Frank Hsu, Cheng-Kuan Lin
      The k-ary n-cube is one of the most attractive interconnection networks for parallel and distributed computing systems. In this paper, we investigate the fault-tolerant capabilities of k-ary n-cubes with respect to structure connectivity and substructure connectivity. Let H be a connected graph. The structure connectivity of a graph G, denoted by κ(G; H), is the minimum cardinality of a set of connected subgraphs of G, each isomorphic to H, whose deletion disconnects G. The substructure connectivity of a graph G, denoted by κ^s(G; H), is the minimum cardinality of a set of connected subgraphs of G, each isomorphic to a connected subgraph of H, whose deletion disconnects G. We determine κ(Q_n^k; H) and κ^s(Q_n^k; H) for each H ∈ {K_1, K_{1,1}, K_{1,2}, K_{1,3}}.
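
      To make the objects concrete, here is a small sketch (ours, purely illustrative): it builds Q_n^k, whose vertices are length-n tuples over Z_k with adjacency given by a ±1 (mod k) change in one coordinate, and tests whether deleting a chosen vertex set, e.g. the vertex sets of some stars, disconnects it.

      from itertools import product
      from collections import deque

      def kary_ncube(k, n):
          """Adjacency lists of Q_n^k: vertices are n-tuples over Z_k, adjacent
          when exactly one coordinate differs by +/-1 (mod k)."""
          V = list(product(range(k), repeat=n))
          adj = {v: [] for v in V}
          for v in V:
              for i in range(n):
                  for d in (1, -1):
                      u = list(v); u[i] = (u[i] + d) % k
                      u = tuple(u)
                      if u != v and u not in adj[v]:   # avoid duplicates when k = 2
                          adj[v].append(u)
          return adj

      def connected_after_deletion(adj, deleted):
          """BFS over the surviving vertices; True iff they stay connected."""
          deleted = set(deleted)
          rest = [v for v in adj if v not in deleted]
          if not rest:
              return True
          seen, queue = {rest[0]}, deque([rest[0]])
          while queue:
              v = queue.popleft()
              for u in adj[v]:
                  if u not in deleted and u not in seen:
                      seen.add(u); queue.append(u)
          return len(seen) == len(rest)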

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.11.047
      Issue No: Vol. 433-434 (2018)
       
  • Pixel convolutional neural network for multi-focus image fusion
    • Authors: Han Tang; Bin Xiao; Weisheng Li; Guoyin Wang
      Pages: 125 - 141
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Han Tang, Bin Xiao, Weisheng Li, Guoyin Wang
      This paper proposes a pixel-wise convolutional neural network (p-CNN) that can recognize focused and defocused pixels in source images from their neighbourhood information for multi-focus image fusion. The proposed p-CNN can be regarded as a learned focus measure (FM) and is more effective than conventional handcrafted FMs. To give the p-CNN a strong capability to discriminate focused from defocused pixels, a comprehensive training image set based on a public image database is created. Furthermore, by setting precise labels according to different focus levels and adding various defocus masks, the p-CNN can accurately measure the focus level of each pixel in the source images, so that artefacts in the fused image are effectively avoided. We also propose a method to implement the p-CNN as a conventional image convolutional neural network (image-wise CNN), which is almost 25 times faster than applying the p-CNN directly in multi-focus image fusion. Experimental results demonstrate that the proposed method is competitive with or even outperforms state-of-the-art methods in terms of both subjective visual perception and objective evaluation metrics.
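
      The general shape of such a patch-based focus classifier is easy to sketch. The layer sizes below and the use of PyTorch are our assumptions; the authors' p-CNN and its image-wise conversion differ in detail.

      import torch
      import torch.nn as nn

      class FocusMeasureCNN(nn.Module):
          """Toy pixel-wise focus classifier: given a small neighbourhood
          patch, output logits for whether the centre pixel is focused or
          defocused. Applying the same weights fully convolutionally over the
          whole image is what makes an image-wise variant much faster than
          classifying each patch separately."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.head = nn.Linear(64, 2)    # focused vs. defocused

          def forward(self, x):               # x: (B, 1, patch, patch)
              return self.head(self.features(x).flatten(1))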

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.043
      Issue No: Vol. 433-434 (2018)
       
  • A best-path-updating information-guided ant colony optimization algorithm
    • Authors: Jiaxu Ning; Qin Zhang; Changsheng Zhang; Bin Zhang
      Pages: 142 - 162
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Jiaxu Ning, Qin Zhang, Changsheng Zhang, Bin Zhang
      The ant colony optimization (ACO) algorithm is a classical swarm intelligence algorithm that is especially suitable for combinatorial optimization problems. To further improve the convergence speed without degrading solution quality, this paper designs a novel strengthened pheromone update mechanism that uses dynamic information from the best path found so far to strengthen the pheromone on its edges. In addition, to enhance the global search capability, a novel pheromone-smoothing mechanism is designed to reinitialize the pheromone matrix when the ACO search process approaches a defined stagnation state. The improved algorithm is analyzed and tested on a set of benchmark test cases. The experimental results show that it performs better than the compared algorithms in terms of both the diversity of the solutions obtained and convergence speed.
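
      A minimal sketch of the two mechanisms just described, best-path reinforcement and stagnation-triggered smoothing, is given below; the evaporation rate, deposit rule, and stagnation threshold are generic assumptions rather than the paper's exact parameters.

      import numpy as np

      def reinforce_best_path(tau, best_path, best_len, rho=0.1, Q=1.0):
          """Evaporate all pheromone, then deposit extra pheromone along the
          edges of the best path found so far (symmetric problem assumed)."""
          tau *= (1.0 - rho)
          for i, j in zip(best_path, best_path[1:]):
              tau[i, j] += Q / best_len
              tau[j, i] = tau[i, j]
          return tau

      def smooth_if_stagnant(tau, stagnant_iters, limit=50, delta=0.5):
          """Pheromone smoothing: if the search has stagnated for `limit`
          iterations, pull all trail values toward their mean to restore
          global exploration without a full restart."""
          if stagnant_iters >= limit:
              tau = tau + delta * (tau.mean() - tau)
          return tau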

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.047
      Issue No: Vol. 433-434 (2018)
       
  • Distributed multi-target tracking with Y-shaped passive linear array
           sonars for effective ghost track elimination
    • Authors: Qian Zhang; Yifan Xie; Taek Lyul Song
      Pages: 163 - 187
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Qian Zhang, Yifan Xie, Taek Lyul Song
      Y-shaped passive linear array sonar (PLAS) systems are composed of three sensor legs that independently report bearings-only measurements with bearing ambiguity. Since many ghost targets are generated by this ambiguity, multi-target tracking with a PLAS system is a challenging problem, especially when target miss-detection and clutter are also considered. Centralized methods usually achieve good tracking performance; however, they suffer from heavy communication and computational loads, as all measurements generated by all sensors are sent to the fusion center (FC). To reduce these burdens, a distributed target tracking method is proposed. In this method, to reduce the numbers of false tracks and ghost tracks, the original bearings-only measurements are first tracked, without considering the bearing ambiguity, at each local PLAS using a linear multi-target integrated probabilistic data association (LM-IPDA) tracker, which supports false track discrimination (FTD). The estimated bearings-only measurements from each local tracker are then transmitted to the FC, where multiple targets are tracked using the sequential LM-IPDA while accounting for the bearing ambiguity. To further reduce the number of false tracks generated by the bearing ambiguity, a novel measurement-to-track assignment method is proposed for the distributed tracking method. Simulations show that the proposed methods achieve high tracking accuracy with lower communication and computational loads for multi-target tracking with the Y-shaped PLAS system.

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.042
      Issue No: Vol. 433-434 (2018)
       
  • Reversible data-hiding in encrypted images by redundant space transfer
    • Authors: Zi-Long Liu; Chi-Man Pun
      Pages: 188 - 203
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Zi-Long Liu, Chi-Man Pun
      Reversible data-hiding in an encrypted image (RDHEI) embeds additional data into encrypted image content in such a way that the data-hiding operation does not affect the lossless recovery of that content. In previous RDHEI methods, the encrypted images contain little redundant space; these approaches therefore suffer from low embedding rates and burden content owners with too many tasks. In contrast, we propose a novel RDHEI scheme based on redundant space transfer (RST), which transfers redundant space from the original image to the encrypted image. The encrypted image then necessarily contains redundant space, so reversibly embedding data into it becomes easy and efficient, and general reversible data-hiding (RDH) algorithms can be used. The proposed scheme offers a high embedding rate and demands little of the content owner. The experimental results show that the proposed scheme outperforms other RDHEI algorithms.
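
      The flavour of "vacating room" before encryption can be sketched as follows. This toy compresses the LSB plane of a uint8 image to free embedding room that survives any position-preserving encryption; it only works when the LSB plane is compressible (rarely true of natural images, which is precisely why the authors' RST scheme is more elaborate), and it is not their algorithm.

      import zlib
      import numpy as np

      def vacate_room(img):
          """Losslessly compress the LSB plane and write the compressed bits
          back at the head of that plane; the tail of the plane becomes free
          room for the data hider, and the backup allows exact recovery."""
          flat = img.flatten().astype(np.uint8)
          lsb = flat & 1
          packed = zlib.compress(np.packbits(lsb).tobytes())
          payload = np.unpackbits(np.frombuffer(packed, dtype=np.uint8))
          assert payload.size < lsb.size, "LSB plane must be compressible"
          flat &= 0xFE                         # clear the whole LSB plane
          flat[:payload.size] |= payload       # keep the compressed backup
          room = lsb.size - payload.size       # bits freed for embedding
          return flat.reshape(img.shape), room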

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.044
      Issue No: Vol. 433-434 (2018)
       
  • Relationship recommender system in a business and employment-oriented
           social network
    • Authors: Pablo Chamoso; Alberto Rivas; Sara Rodríguez; Javier Bajo
      Pages: 204 - 220
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Pablo Chamoso, Alberto Rivas, Sara Rodríguez, Javier Bajo
      In the last ten years, social networks have had a great influence on people's lifestyles and have changed, above all, the way users communicate and relate to one another. This is why one of the main lines of research in the field of social networks focuses on finding and analyzing possible connections between users. Such developments allow users to expand their network of contacts without having to search the entire set of users. However, many types of social networks attract users with specific needs, and these needs influence the type of contacts users are looking for. Our article proposes a relationship recommender system for a business and employment-oriented social network. The presented system extracts relevant information from the social network and uses it to recommend suitable new contacts and job offers to users. The recommender system draws on information gathered from job offer descriptions, user profiles, and users' actions. Different metrics are then applied in order to discover new ties that are likely to develop into relationships.
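
      A minimal content-based sketch in this spirit: profiles and job-offer descriptions become TF-IDF vectors, and offers are ranked per user by cosine similarity. The described system additionally exploits users' actions and network structure, which this sketch omits; all names are ours.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def recommend_offers(user_profiles, job_offers, top_n=5):
          """Rank job offers for each user by textual similarity.
          user_profiles and job_offers are lists of plain-text documents;
          returns, per user, the indices of the top_n most similar offers."""
          vec = TfidfVectorizer(stop_words="english")
          docs = vec.fit_transform(user_profiles + job_offers)
          P = docs[:len(user_profiles)]
          J = docs[len(user_profiles):]
          sims = cosine_similarity(P, J)
          return sims.argsort(axis=1)[:, ::-1][:, :top_n]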

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.050
      Issue No: Vol. 433-434 (2018)
       
  • Positive unlabeled learning for building recommender systems in a
           parliamentary setting
    • Authors: Luis M. de Campos; Juan M. Fernández-Luna; Juan F. Huete; Luis Redondo-Expósito
      Pages: 221 - 232
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Luis M. de Campos, Juan M. Fernández-Luna, Juan F. Huete, Luis Redondo-Expósito
      Our goal is to learn about the political interests and preferences of Members of Parliament (MPs) by mining their parliamentary activity, in order to develop a recommendation/filtering system that determines how relevant documents should be distributed among MPs. We propose the use of positive unlabeled learning to tackle this problem: we only have information about relevant documents (the interventions of each MP in debates), not about irrelevant ones, so standard binary classifiers trained with positive and negative examples cannot be used. We have also developed a new positive unlabeled learning algorithm that compares favorably with (a) a baseline approach which assumes that every intervention by any other MP is irrelevant, (b) another well-known positive unlabeled learning method, and (c) an approach based on information retrieval methods that matches documents to legislators' representations. The experiments were conducted with data from the regional Spanish Andalusian Parliament.
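
      For readers unfamiliar with the setting, a generic PU baseline (in the style of Elkan and Noto, 2008; not the authors' new algorithm) can be sketched in a few lines: train a classifier to separate labeled positives from unlabeled examples, estimate the label frequency on the positives, and rescale the scores.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def pu_baseline(X, s):
          """X: feature matrix; s: 1 for labeled-positive, 0 for unlabeled.
          Returns approximate P(y = 1 | x) for every example, obtained by
          dividing the 'labeled' probability by the estimated label
          frequency c = P(s = 1 | y = 1)."""
          clf = LogisticRegression(max_iter=1000).fit(X, s)
          g = clf.predict_proba(X)[:, 1]
          c = g[s == 1].mean()                # label frequency estimate
          return np.clip(g / c, 0.0, 1.0)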

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.046
      Issue No: Vol. 433-434 (2018)
       
  • Material implications in lattice effect algebras
    • Authors: R.A. Borzooei; A. Dvurečenskij; A.H. Sharafi
      Pages: 233 - 240
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): R.A. Borzooei, A. Dvurečenskij, A.H. Sharafi
      In this paper we show that it is not appropriate to use the minimal conditions of implication introduced for orthomodular lattices as the minimal conditions of a good implication for lattice effect algebras. Hence, we first define the concept of a partial t-norm on bounded lattices. We then obtain the concept of a pt-implication on bounded involutive lattices by imposing conditions formulated via partial t-norms. Furthermore, we conclude that the Sasaki arrow, a very important implication in lattice effect algebras, is the best pt-implication on them. Finally, applying the Sasaki arrow, we construct a fuzzy implication on a lattice effect algebra and prove that it is also a weak pt-implication.
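
      For reference, the Sasaki arrow admits the following standard forms (our recollection of the usual definitions; the paper's pt-implication axioms are not reproduced here): in an orthomodular lattice, and in a lattice effect algebra where \oplus denotes the partial sum,

      a \rightarrow_{S} b = a' \vee (a \wedge b)     % orthomodular lattice
      a \rightarrow_{S} b = a' \oplus (a \wedge b)   % lattice effect algebra
      % The second right-hand side is defined because a \wedge b \le a = (a')',
      % i.e. a \wedge b is orthogonal to a'.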

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.049
      Issue No: Vol. 433-434 (2018)
       
  • Convolutional networks with cross-layer neurons for image recognition
    • Authors: Zeng Yu; Tianrui Li; Guangchun Luo; Hamido Fujita; Ning Yu; Yi Pan
      Pages: 241 - 254
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Zeng Yu, Tianrui Li, Guangchun Luo, Hamido Fujita, Ning Yu, Yi Pan
      Very deep convolutional networks have recently achieved a series of breakthroughs on challenging tasks such as the ImageNet and COCO competitions. However, such deep neural networks are difficult to train. In this paper, we present a novel structure called the cross-layer neurons architecture, which makes it possible to train deeper neural networks effectively. It uses cross-layer neurons to synthesize the information (features) learned by all the lower-level layers and send it to the higher-level layers through cross-layer connections. Based on this architecture, we propose a new deep neural model termed Cross-Layer Neurons Networks (CLNN). We show that CLNN alleviates the vanishing-gradient problem and improves the convergence rate of classification. Comparative experiments on several benchmark datasets (MNIST, CIFAR-10, CIFAR-100, SVHN and STL-10) clearly demonstrate that our proposed model is suitable for training deeper networks and can effectively improve performance by utilizing cross-layer neurons.
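
      A sketch of the connectivity pattern as we read it: a higher-level block receives a synthesis of all lower-level feature maps (here channel concatenation followed by a 1x1 convolution, which resembles dense connectivity). The module below is our PyTorch illustration, not the authors' exact architecture.

      import torch
      import torch.nn as nn

      class CrossLayerBlock(nn.Module):
          """Higher-level block fed by all lower-level feature maps: the maps
          are concatenated channel-wise, fused by a 1x1 convolution (the
          'cross-layer' synthesis), then processed by an ordinary conv."""
          def __init__(self, channels, n_lower):
              super().__init__()
              self.synth = nn.Conv2d(channels * n_lower, channels, kernel_size=1)
              self.conv = nn.Sequential(
                  nn.Conv2d(channels, channels, 3, padding=1),
                  nn.BatchNorm2d(channels), nn.ReLU(),
              )

          def forward(self, lower_features):   # list of (B, C, H, W) tensors
              x = self.synth(torch.cat(lower_features, dim=1))
              return self.conv(x)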

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.045
      Issue No: Vol. 433-434 (2018)
       
  • ISAT: An intelligent Web service selection approach for improving
           reliability via two-phase decisions
    • Authors: Weidong Wang; Zhangqin Huang; Liqiang Wang
      Pages: 255 - 273
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Weidong Wang, Zhangqin Huang, Liqiang Wang
      Due to the stochasticity and uncertainty of malicious Web services over the Internet, it is difficult to select reliable services that also meet non-functional requirements in service-oriented systems. To avoid the unreliable real-world process of obtaining services, this paper proposes a novel service selection approach based on two-phase decisions for enhancing the reliability of service-oriented systems. In the first phase, we define the problem of finding reliable service candidates as a multiple criteria decision making (MCDM) problem and construct a decision model to address it. In the second phase, we define the problem of selecting services based on non-functional requirements as an optimization problem, and we propose a convex hull based approach for solving it. Large-scale, real-world experiments are conducted to show the advantages of the proposed approach. The evaluation results confirm that our approach achieves a higher success rate and lower computation time than other state-of-the-art approaches while guaranteeing reliability.
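
      The geometric intuition behind a convex hull based second phase can be sketched as follows: when candidates are points in QoS space (smaller is better) and the utility is linear, the optimum is always attained at a vertex of the convex hull, so the search can be restricted to hull vertices. The code is our illustration of that idea, not the paper's algorithm, and assumes at least three non-collinear candidates.

      import numpy as np
      from scipy.spatial import ConvexHull

      def hull_select(candidates, weights):
          """candidates: (n, d) array of QoS vectors, e.g. (response_time,
          cost), smaller is better; weights: linear utility weights. Only
          convex-hull vertices can minimise a linear objective, so we score
          just those and return the index of the winning candidate."""
          pts = np.asarray(candidates, dtype=float)
          hull = ConvexHull(pts)
          idx = hull.vertices
          scores = pts[idx] @ np.asarray(weights, dtype=float)
          return int(idx[np.argmin(scores)])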

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.048
      Issue No: Vol. 433-434 (2018)
       
  • An exact approach to extend network lifetime in a general class of
           wireless sensor networks
    • Authors: Fabián Castaño; André Rossi; Marc Sevaux; Nubia Velasco
      Pages: 274 - 291
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 433–434
      Author(s): Fabián Castaño, André Rossi, Marc Sevaux, Nubia Velasco
      This paper provides a general framework to model and optimize lifetime maximization problems in wireless sensor networks whose sensors have specialized capabilities, such as the ability to adjust their sensing range or change their direction. To identify the set of tasks that a sensor carries out, the concept of a role is introduced; roles include the sensor's direction, sensing range, communication mode, and combinations of these. The purpose is to identify schedules, represented as allocations of roles to sensors together with time intervals for assuming those roles, that cover all targets and transmit signals to the base station. To do so, a large-scale linear programming model is proposed and solved by an exact approach based on column generation, complemented by a branch-and-cut procedure for the pricing subproblem. The approach is tested on an extensive set of randomly generated instances. Computational results show its potential on medium-to-large instances, for which either optimal or high-quality solutions can be computed in short computational times.
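
      The column-generation loop itself is compact enough to sketch. Below, columns are covers (sensor sets that jointly cover all targets), the restricted master LP maximises total activation time under battery budgets, and a greedy weighted set-cover heuristic stands in for the paper's exact branch-and-cut pricing; all names are ours, and a feasible initial cover set is assumed.

      import numpy as np
      from scipy.optimize import linprog

      def max_lifetime(cover, battery, covers_init, max_iter=50):
          """cover[s, t] = 1 if sensor s covers target t under some role;
          battery[s] is its energy budget; covers_init is a feasible list of
          covers. Returns the LP lifetime and the generated covers."""
          S, T = cover.shape
          cols = [set(c) for c in covers_init]
          for _ in range(max_iter):
              A = np.array([[1.0 if s in c else 0.0 for c in cols] for s in range(S)])
              res = linprog(-np.ones(len(cols)), A_ub=A, b_ub=battery, method="highs")
              y = -res.ineqlin.marginals          # nonnegative dual price per sensor
              new, covered = set(), set()
              while covered < set(range(T)):      # greedy pricing under prices y
                  s = min((i for i in range(S) if i not in new),
                          key=lambda i: y[i] / max(1, len(set(np.flatnonzero(cover[i])) - covered)))
                  new.add(s)
                  covered |= set(np.flatnonzero(cover[s]))
              if sum(y[s] for s in new) >= 1.0 - 1e-9:
                  break                           # no cover with positive reduced cost
              cols.append(new)
          return -res.fun, cols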

      PubDate: 2018-02-05T13:11:14Z
      DOI: 10.1016/j.ins.2017.12.028
      Issue No: Vol. 433-434 (2018)
       
  • A note on “On the conditional distributivity of nullnorms over
           uninorms” [Information Sciences, 317 (2015) 157–169]
    • Authors: Dragan Jočić
      Abstract: Publication date: April 2018
      Source:Information Sciences, Volumes 436–437
      Author(s): Dragan Jočić
      In this paper, the study of the conditional distributivity of nullnorms over uninorms given in [2] is revisited and corrected: the result of Theorem 8 of that paper is restated in its correct form.

      PubDate: 2018-02-05T13:11:14Z
       
 
 