Information Sciences
  [SJR: 2.513]   [H-I: 114]
   Hybrid journal (may contain Open Access articles)
   ISSN (Print) 0020-0255
   Published by Elsevier
  • Event-triggered non-fragile control for linear systems with actuator
           saturation and disturbances
    • Authors: Dan Liu; Guang-Hong Yang
      Pages: 1 - 11
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Dan Liu, Guang-Hong Yang
      This paper investigates the problem of event-triggered non-fragile dynamic output feedback controller design for linear systems with actuator saturation and disturbances. The controller to be designed is assumed to be subject to additive gain variations. By using Lyapunov stability theory and introducing slack matrix variables, new sufficient conditions are derived for designing the event-triggering parameters and the controller gains. Compared with existing non-fragile dynamic output feedback controller design methods, the structural restriction on the Lyapunov matrix is relaxed. The effectiveness of the proposed method is demonstrated by two examples.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.11.003
      Issue No: Vol. 429 (2017)
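The event-triggering idea described in the abstract, transmitting a new measurement only when the sampling error grows large relative to the current state, can be sketched with a generic relative-threshold rule. The quadratic form and the value of `sigma` below are illustrative assumptions, not the paper's co-designed parameters:

```python
def should_transmit(x_last_sent, x_current, sigma=0.1):
    """Trigger a new transmission when the squared sampling error exceeds
    a fraction sigma of the squared norm of the current state."""
    err = sum((a - b) ** 2 for a, b in zip(x_last_sent, x_current))
    norm = sum(v ** 2 for v in x_current)
    return err > sigma * norm

def simulate(states, sigma=0.1):
    """Return the indices of the sampled states that would actually be
    sent over the network under the event-triggered rule."""
    sent = [0]                 # the first sample is always transmitted
    x_last = states[0]
    for k, x in enumerate(states[1:], start=1):
        if should_transmit(x_last, x, sigma):
            sent.append(k)
            x_last = x
    return sent
```

Only samples that violate the threshold are sent, which is what saves communication compared with periodic sampling.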
       
  • SMOPAT: Mining semantic mobility patterns from trajectories of private
           vehicles
    • Authors: Chengcheng Wan; Yanmin Zhu; Jiadi Yu; Yanyan Shen
      Pages: 12 - 25
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Chengcheng Wan, Yanmin Zhu, Jiadi Yu, Yanyan Shen
      With the increasing use of private vehicles with positioning services, GPS trajectory data of vehicles has become one of the major sources of big data about urban life. Existing studies on mobility pattern mining from trajectories share a common limitation, i.e., they fail to capture the semantics of trajectories, and automatically deriving semantic information for every trajectory is a challenging task. In this paper, we propose an approach, called SMOPAT (Semantic MObility PATterns), for mining spatial-temporal semantic mobility patterns from trajectories of private vehicles. We design a probabilistic generative model with latent variables to characterize the semantic mobility of vehicles. Based on the model, SMOPAT labels each location in a trajectory with a visit purpose by using a polynomial-time dynamic programming algorithm. It then employs an efficient algorithm to find the most frequent semantic mobility patterns. We evaluate our approach on a large data set of real trajectories of private vehicles in Shanghai, China, spanning more than ten months and containing 114 million records. The experimental results show that our approach produces meaningful patterns and outperforms two competing methods in terms of diversity, coherence, and coverage.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.043
      Issue No: Vol. 429 (2017)
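The dynamic-programming labelling step that assigns a visit purpose to each stop can be sketched as a Viterbi-style recursion. The emission and transition scores below are illustrative stand-ins for SMOPAT's generative-model probabilities, which the abstract does not specify:

```python
def label_purposes(emission, transition):
    """Viterbi-style dynamic program over purpose labels.
    emission[t][p]  : score of purpose p at stop t (placeholder values)
    transition[p][q]: score of moving from purpose p to purpose q
    Returns the highest-scoring sequence of purpose indices."""
    n, P = len(emission), len(emission[0])
    score = [emission[0][:]]
    back = []
    for t in range(1, n):
        row, brow = [], []
        for q in range(P):
            best_p = max(range(P), key=lambda p: score[-1][p] + transition[p][q])
            row.append(score[-1][best_p] + transition[best_p][q] + emission[t][q])
            brow.append(best_p)
        score.append(row)
        back.append(brow)
    # backtrack the best labelling from the final stop
    last = max(range(P), key=lambda p: score[-1][p])
    path = [last]
    for brow in reversed(back):
        path.append(brow[path[-1]])
    return list(reversed(path))
```

The recursion is polynomial (O(n P^2)), matching the complexity claim in the abstract.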
       
  • Uncertainty theory as a basis for belief reliability
    • Authors: Zhiguo Zeng; Rui Kang; Meilin Wen; Enrico Zio
      Pages: 26 - 36
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Zhiguo Zeng, Rui Kang, Meilin Wen, Enrico Zio
      Belief reliability is a newly developed, model-based reliability metric which considers both what we know (expressed as reliability models) and what we don’t know (expressed as epistemic uncertainty in the reliability models) about the reliability. In this paper, we show that, due to the explicit representation of epistemic uncertainty, belief reliability should not be regarded as a probability measure; rather, it should be treated as an uncertain measure in uncertainty theory. A minimal cut set-based method is developed to calculate the belief reliability of coherent systems. A numerical algorithm is then presented for belief reliability analysis based on fault tree models. Application results show that the developed methods require less computation than the structure-function-based method of classical reliability theory.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.050
      Issue No: Vol. 429 (2017)
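Under uncertainty theory's calculus for independent components, series structures take the minimum of the component belief reliabilities and parallel structures the maximum, so a minimal cut set evaluation reduces to nested min/max operations. A minimal sketch assuming exactly that min/max rule (the paper's full fault-tree algorithm is more general):

```python
def belief_reliability(cut_sets, R):
    """Belief reliability of a coherent system from its minimal cut sets.
    cut_sets: list of lists of component indices; the system works iff
    every minimal cut set still contains at least one working component.
    R: component belief reliabilities. Under the assumed min/max calculus,
    R_sys = min over cut sets of (max of member reliabilities)."""
    return min(max(R[i] for i in cut) for cut in cut_sets)
```

For example, two redundant components in series with a third give cut sets {0,1} and {2}.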
       
  • Perceptual multi-channel visual feature fusion for scene categorization
    • Authors: Xiao Sun; Zhenguang Liu; Yuxing Hu; Luming Zhang; Roger Zimmermann
      Pages: 37 - 48
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Xiao Sun, Zhenguang Liu, Yuxing Hu, Luming Zhang, Roger Zimmermann
      Effectively recognizing scenes from a variety of categories is an indispensable but challenging capability in computer vision and intelligent systems. In this work, we propose a novel image kernel based on human gaze shifting, aiming to discover the mechanism by which humans perceive visually/semantically salient regions within a scene. More specifically, we first design a weakly supervised embedding algorithm which projects the local image features (i.e., graphlets in this work) onto a pre-defined semantic space; each graphlet is thereby described by multiple visual features at both low and high levels. It is generally acknowledged that humans attend to only a few regions within a scene, so we formulate a sparsity-constrained graphlet ranking algorithm which incorporates visual cues at both the low and the high level. According to human visual perception, the top-ranked graphlets are either visually or semantically salient. We sequentially connect them into a path which mimics human gaze shifting. Lastly, a so-called gaze shifting kernel (GSK) is calculated from the paths learned on a collection of scene images, and a kernel SVM is employed to predict the scene categories. Comprehensive experiments on a series of well-known scene image sets show the competitiveness and robustness of our GSK. We also demonstrate the high consistency of the predicted paths with real human gaze-shifting paths.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.051
      Issue No: Vol. 429 (2017)
       
  • An improved artificial bee colony algorithm based on the gravity model
    • Authors: Wan-li Xiang; Xue-lei Meng; Yin-zhen Li; Rui-chun He; Mei-qing An
      Pages: 49 - 71
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Wan-li Xiang, Xue-lei Meng, Yin-zhen Li, Rui-chun He, Mei-qing An
      The artificial bee colony (ABC) algorithm is a relatively new biologically-inspired optimization algorithm. Its solution search equation shows that ABC is good at exploration but poor at exploitation. Inspired by the gravity model, an attractive force model is proposed for choosing a better neighbor of the current individual to improve the exploitation ability of ABC. We then propose a novel solution search equation, in which the chosen neighbor plays an important role in guiding the search process in the employed bee phase. Next, a random guided search is introduced in the onlooker bee phase to balance the foregoing exploitation. Subsequently, multiple solution search equations, a perturbation frequency scheme, and a multiple-scouts search strategy based on opposition-based learning are incorporated into the proposed algorithm, called ABCG, to further reach a good compromise between exploitation and exploration. Finally, ABCG is tested on a large number of benchmark functions. The experimental results show that ABCG is effective for solving complex benchmark problems and can be considered a competitive approach.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.11.007
      Issue No: Vol. 429 (2017)
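A gravity-style neighbor choice, the mechanism the abstract describes for improving exploitation, can be sketched as picking the individual with the largest attraction score. The form fitness/distance² below is an assumption inspired by Newtonian gravity; the paper's exact force model may differ:

```python
def gravity_neighbor(i, swarm, fitness):
    """Pick a guiding neighbor for individual i: better fitness and
    smaller distance both increase the attraction score
    fitness[j] / d(x_i, x_j)^2 (illustrative force model)."""
    best_j, best_force = None, -1.0
    for j, x in enumerate(swarm):
        if j == i:
            continue
        d2 = sum((a - b) ** 2 for a, b in zip(swarm[i], x)) or 1e-12
        force = fitness[j] / d2
        if force > best_force:
            best_j, best_force = j, force
    return best_j
```

The chosen neighbor would then guide the employed-bee search step, e.g. a move of the form x_i + phi * (x_neighbor - x_i).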
       
  • Incremental rough set approach for hierarchical multicriteria
           classification
    • Authors: Chuan Luo; Tianrui Li; Hongmei Chen; Hamido Fujita; Zhang Yi
      Pages: 72 - 87
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Chuan Luo, Tianrui Li, Hongmei Chen, Hamido Fujita, Zhang Yi
      Multicriteria classification refers to assigning objects evaluated on a set of criteria to preference-ordered decision classes. The dominance-based rough set approach has been successfully introduced to express and reason about inconsistencies with the dominance principle in multicriteria classification problems. Hierarchical attribute values exist extensively in many real-world applications; they provide a hierarchical form to organize, view and analyze data from different perspectives and accommodate preference variability. In this study, we consider an extension of the dominance-based rough set approach that applies an incremental learning technique to hierarchical multicriteria classification while attribute values dynamically vary across different levels of granulation. We formalize the dynamic characteristics of knowledge granules under cut refinement and coarsening through attribute value taxonomies in hierarchical multicriteria decision systems. Consequently, incremental algorithms for computing dominance-based rough approximations of preference-ordered decision classes are developed that take the previously computed results as prior knowledge and recompute only those outputs which depend on the changed attribute values. This paper presents the theoretical foundation of the proposed approach. Example analysis and experimental evaluation are also provided to illustrate its feasibility and efficiency.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.11.004
      Issue No: Vol. 429 (2017)
       
  • A Generic Scheme of plaintext-checkable database encryption
    • Authors: Sha Ma; Yi Mu; Willy Susilo
      Pages: 88 - 101
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Sha Ma, Yi Mu, Willy Susilo
      Database encryption is essential for cloud database systems. For a large database, decryption can take substantial computational time, so verifying that a ciphertext contains a correct plaintext without decryption becomes significant for a large database system. Plaintext-checkable encryption (PCE), first proposed by Canard et al. at CT-RSA 2012, is a potential tool for such database systems. Although generic PCE in the random oracle model has been studied intensively, generic PCE in the standard model and its efficient implementation remain challenging problems. This paper presents the first generic PCE in the standard model using smooth projective hash functions (SPHFs) and proves its s-priv1-cca security, which is independent of the existing unlink security. Based on an SPHF instantiated from the DDH assumption, we obtain the most efficient PCE in the standard model, without any pairing operation. Finally, we improve two existing generic constructions in the random oracle model so that they are secure under chosen-ciphertext attack.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.11.010
      Issue No: Vol. 429 (2017)
       
  • Eliminating the texture features in visually meaningful cipher images
    • Authors: Yu-Guang Yang; Yong-Ce Zhang; Xiu-Bo Chen; Yi-Hua Zhou; Wei-Min Shi
      Pages: 102 - 119
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Yu-Guang Yang, Yong-Ce Zhang, Xiu-Bo Chen, Yi-Hua Zhou, Wei-Min Shi
      Generally, most image encryption algorithms transform original images into texture-like or noise-like ones. Unfortunately, such texture- or noise-like features are precisely a hint to attackers that encryption has taken place. To solve this problem, Bao and Zhou [2] proposed a new image encryption algorithm which transforms original images into visually meaningful ones. However, the resulting cipher images still have some texture features which may, to some extent, hint at the existence of encryption. To overcome this flaw, we present a new visually meaningful image encryption algorithm based on the discrete wavelet transform and discrete quantum walks. Numerical simulation, security analysis and performance comparison demonstrate that our proposal is superior to Bao and Zhou's scheme. It promotes the combination of image encryption and quantum computation.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.11.009
      Issue No: Vol. 429 (2017)
       
  • Robust compressive sensing of multichannel EEG signals in the presence of
           impulsive noise
    • Authors: Xiuming Zou; Lei Feng; Huaijiang Sun
      Pages: 120 - 129
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Xiuming Zou, Lei Feng, Huaijiang Sun
      Recent studies have shown that combining cosparsity and the low-rank property usually yields efficient compressive sensing (CS) recovery approaches for multichannel electroencephalogram (EEG) signals. However, existing methods rarely consider noise, or consider only the Gaussian noise generated during transmission; when the measurements are corrupted by impulsive noise, the performance of these CS approaches degrades. In this study, a new robust CS approach is proposed to accurately recover multichannel EEG signals from measurements corrupted by impulsive noise. We first employ the Welsch estimator to suppress the influence of impulsive noise in CS reconstruction. We then develop an efficient iterative scheme based on half-quadratic theory and the alternating direction method of multipliers to solve the resulting nonconvex optimization problem. Experimental results show that our approach obtains better reconstructions than existing state-of-the-art CS methods when measurements are corrupted by impulsive noise.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.11.002
      Issue No: Vol. 429 (2017)
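The role of the Welsch estimator can be illustrated on a scalar location problem: half-quadratic optimization alternates between computing Welsch weights and solving a weighted least-squares step, so impulsive outliers are progressively down-weighted. A minimal sketch, standing in for the paper's full CS data term rather than its actual algorithm:

```python
import math
import statistics

def welsch_weight(r, sigma):
    """Welsch weight: close to 1 for small residuals, decays to 0 for
    outliers, which is what makes the loss robust to impulsive noise."""
    return math.exp(-(r / sigma) ** 2)

def robust_mean(values, sigma=1.0, iters=50):
    """Half-quadratic / IRLS estimate of a location parameter under the
    Welsch loss. Median initialization keeps the first weights sensible."""
    mu = statistics.median(values)
    for _ in range(iters):
        w = [welsch_weight(v - mu, sigma) for v in values]
        mu = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mu
```

An impulsive outlier such as 100 among near-zero samples receives a weight of essentially zero and barely affects the estimate.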
       
  • An Evolutionary Multi-objective Optimization algorithm for the routing of
           droplets in Digital Microfluidic Biochips
    • Authors: Julio Juárez; Carlos A. Brizuela; Israel M. Martínez-Pérez
      Pages: 130 - 146
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Julio Juárez, Carlos A. Brizuela, Israel M. Martínez-Pérez
      Many laboratory-scale biochemical processes, including clinical diagnostics, are being revolutionized by Digital Microfluidic Biochips (DMFBs), owing to their high automation capability, low cost, portability, and efficiency. Central to the efficient operation of these devices is the droplet routing problem, which aims to drive a set of droplets, each from its source cell to its target cell, without violating a given set of fluidic and timing constraints. Routing efficiency is measured by the number of cells used and the arrival time of the latest droplet, and both criteria are to be minimized simultaneously. To solve this problem we propose an Evolutionary Multi-objective Optimization algorithm for the Droplet Routing problem (EMO-DR) based on the NSGA-II framework, in which the crossover operator is not used. EMO-DR features new mutation operators and a biased random generator of initial solutions. Experimental results show that the proposed approach produces competitive results compared with those obtained by state-of-the-art methods. The paper also highlights the main challenges that evolutionary approaches must address when dealing with this routing problem.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.11.011
      Issue No: Vol. 429 (2017)
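The two routing objectives are compared by Pareto dominance, the core relation EMO-DR inherits from NSGA-II. A minimal sketch of the dominance test and the resulting non-dominated front, where the objective tuples are hypothetical (cells used, latest arrival time) pairs:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (minimization in both objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples.
    The full algorithm adds non-dominated ranking, crowding distance,
    and the mutation operators described in the abstract."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```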
       
  • Rank minimization with applications to image noise removal
    • Authors: Yu-Mei Huang; Hui-Yin Yan; You-Wei Wen; Xue Yang
      Pages: 147 - 163
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Yu-Mei Huang, Hui-Yin Yan, You-Wei Wen, Xue Yang
      The rank minimization problem has a wide range of applications in different areas. However, since this problem is NP-hard and non-convex, a frequently used approach is to replace matrix rank minimization with nuclear norm minimization: the nuclear norm is the convex envelope of the matrix rank and is more computationally tractable. Matrix completion is a special case of the rank minimization problem. In this paper, we consider directly using the matrix rank as the regularization term, instead of the nuclear norm, in the cost function for the matrix completion problem. The solution is analyzed and obtained by a hard-thresholding operation on the singular values of the observed matrix. Then, by exploiting a patch-based nonlocal self-similarity scheme, we apply the proposed rank minimization algorithm to remove additive white Gaussian noise in images; multiplicative Gamma noise is also removed in the logarithm domain. The experimental results illustrate that the proposed algorithm removes noise more effectively than the nuclear norm can, and the results are competitive with those obtained by existing state-of-the-art noise removal methods in the literature.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.10.047
      Issue No: Vol. 429 (2017)
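The hard-thresholding operation mentioned in the abstract acts on the spectrum of the observed matrix: singular values above a threshold are kept, the rest are set to zero. A spectrum-only sketch (a full denoiser would wrap this around an SVD of each group of similar patches; the threshold value below is a placeholder):

```python
def hard_threshold(singular_values, tau):
    """Hard-thresholding of singular values: keep sigma_i > tau, zero the
    rest. This is the spectral shrinkage used when the rank itself, rather
    than the nuclear norm, serves as the regularizer; the surviving count
    is the rank of the reconstructed matrix."""
    return [s if s > tau else 0.0 for s in singular_values]
```

Contrast with nuclear-norm regularization, whose solution soft-thresholds the spectrum (subtracting tau from every singular value) instead of truncating it.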
       
  • Locality-regularized linear regression discriminant analysis for feature
           extraction
    • Authors: Pu Huang; Tao Li; Zhenqiu Shu; Guangwei Gao; Geng Yang; Chengshan Qian
      Pages: 164 - 176
      Abstract: Publication date: March 2018
      Source:Information Sciences, Volume 429
      Author(s): Pu Huang, Tao Li, Zhenqiu Shu, Guangwei Gao, Geng Yang, Chengshan Qian
      Locality-regularized linear regression classification (LLRC) is an effective classifier that shows great potential for face recognition. However, the original feature space cannot guarantee the classification efficiency of LLRC. To alleviate this problem, we propose a novel dimensionality reduction method called locality-regularized linear regression discriminant analysis (LLRDA) for feature extraction. The proposed LLRDA is developed according to the decision rule of LLRC and seeks to generate a subspace that is discriminant for LLRC. Specifically, the intra-class and inter-class local reconstruction scatters are first defined to characterize the compactness and separability of samples, respectively. Then, the objective function for LLRDA is derived by maximizing the inter-class local reconstruction scatter and simultaneously minimizing the intra-class local reconstruction scatter. Extensive experimental results on CMU PIE, ORL, FERET, and Yale-B face databases validate the effectiveness of our proposed method.

      PubDate: 2017-11-17T09:57:05Z
      DOI: 10.1016/j.ins.2017.11.001
      Issue No: Vol. 429 (2017)
       
  • A Q-learning-based memetic algorithm for multi-objective dynamic software
           project scheduling
    • Authors: Xiao-Ning Shen; Leandro L. Minku; Naresh Marturi; Yi-Nan Guo; Ying Han
      Pages: 1 - 29
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): Xiao-Ning Shen, Leandro L. Minku, Naresh Marturi, Yi-Nan Guo, Ying Han
      Software project scheduling is the problem of allocating employees to tasks in a software project. Due to the large scale of current software projects, many studies have investigated the use of optimization algorithms to find good software project schedules. However, despite the importance of human factors to the success of software projects, existing work has considered only a limited number of human properties when formulating software project scheduling as an optimization problem. Moreover, the changing environments of software companies mean that software project scheduling is a dynamic optimization problem, yet there is a lack of effective dynamic scheduling approaches to solve it. This work proposes a more realistic mathematical model for the dynamic software project scheduling problem. The model considers that skill proficiency can improve over time and, unlike previous work, that such improvement is affected by the employees’ motivation and learning ability and by the skill difficulty. It also defines an objective of employees’ satisfaction with the allocation, which is considered together with the objectives of project duration, cost, robustness and stability under a variety of practical constraints. To adapt schedules to dynamically changing software project environments, a multi-objective two-archive memetic algorithm based on Q-learning (MOTAMAQ) is proposed to solve the problem in a proactive-rescheduling way. Different from previous work, MOTAMAQ uses Q-learning to learn the most appropriate global and local search methods for different software project environment states. Extensive experiments on 18 dynamic benchmark instances and 3 instances derived from real-world software projects were performed. A comparison with seven other meta-heuristic algorithms shows that the strategies used by our approach are very effective in improving its convergence in dynamic environments while maintaining a good distribution and spread of solutions. The Q-learning-based mechanism can choose appropriate search operators for the different scheduling environments. We also show how different trade-offs among the five objectives can give software managers deeper insight into the various compromises involved, enabling them to make informed decisions.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.041
      Issue No: Vol. 428 (2017)
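The Q-learning mechanism for picking search methods can be sketched with a standard epsilon-greedy policy and one-step update. The state and operator names below are placeholders; MOTAMAQ's actual state encoding, operator pool and reward are defined in the paper:

```python
import random

# Q maps (environment_state, search_operator) pairs to learned values.

def choose_operator(Q, state, operators, eps=0.1):
    """Epsilon-greedy selection of a global/local search operator for the
    current scheduling-environment state."""
    if random.random() < eps:
        return random.choice(operators)
    return max(operators, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, r, s2, operators, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update after observing reward r
    (e.g. the improvement the chosen operator produced)."""
    best_next = max(Q.get((s2, b), 0.0) for b in operators)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
```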
       
  • Independent Bayesian classifier combination based sign language
           recognition using facial expression
    • Authors: Pradeep Kumar; Partha Pratim Roy; Debi Prosad Dogra
      Pages: 30 - 48
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): Pradeep Kumar, Partha Pratim Roy, Debi Prosad Dogra
      Automatic Sign Language Recognition (SLR) systems are usually designed to recognize hand and finger gestures. However, facial expressions, which play an important role in conveying emotional states during sign language communication, have not yet been analyzed to their fullest potential in SLR systems; an SLR system is incomplete without the signer’s facial expressions corresponding to the sign gesture. In this paper, we present a novel multimodal framework for SLR that incorporates facial expressions with sign gestures using two different sensors, namely Leap Motion and Kinect. Sign gestures are recorded using the Leap Motion while, simultaneously, a Kinect captures the facial data of the signer. We have collected a dataset of 51 dynamic sign-word gestures. Recognition is performed using Hidden Markov Models (HMMs). Next, we apply the Independent Bayesian Classifier Combination (IBCC) approach to combine the decisions of the different modalities to improve recognition performance. Our analysis shows promising results, with recognition rates of 96.05% and 94.27% for single- and double-hand gestures, respectively. The proposed multimodal framework achieves gains of 1.84% and 2.60% over the uni-modal framework on single- and double-hand gestures, respectively.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.046
      Issue No: Vol. 428 (2017)
       
  • Softmax regression based deep sparse autoencoder network for facial
           emotion recognition in human-robot interaction
    • Authors: Luefeng Chen; Mengtian Zhou; Wanjuan Su; Min Wu; Jinhua She; Kaoru Hirota
      Pages: 49 - 61
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): Luefeng Chen, Mengtian Zhou, Wanjuan Su, Min Wu, Jinhua She, Kaoru Hirota
      Deep neural networks (DNNs) have been used as learning models for modeling the hierarchical architecture of the human brain. However, DNNs suffer from problems of learning efficiency and computational complexity. To address these problems, a deep sparse autoencoder network (DSAN) is used to learn facial features, taking the sparsity of hidden units into account to learn high-level structures, while Softmax regression (SR) is used to classify the expression features. In this paper, a Softmax regression-based deep sparse autoencoder network (SRDSAN) is proposed to recognize facial emotion in human-robot interaction. It aims to handle the large output of deep learning by using SR; moreover, to overcome the local-extrema and gradient-diffusion problems in training, the overall network weights are fine-tuned to reach the global optimum, which makes the entire deep neural network more robust and thereby enhances the performance of facial emotion recognition. Results show that the average recognition accuracy of SRDSAN is higher than that of SR alone and of the convolutional neural network. Preliminary application experiments are performed in the developing emotional social robot system (ESRS) with two mobile robots, where the emotional social robot is able to recognize emotions such as happiness and anger.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.044
      Issue No: Vol. 428 (2017)
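The Softmax regression head used for classification can be sketched as follows. The weights and feature vector are placeholders; in SRDSAN they would sit on top of the sparse autoencoder's learned representation:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of class scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict(W, b, x):
    """Softmax-regression prediction: linear class scores W x + b followed
    by softmax. Returns (predicted class index, class probabilities)."""
    scores = [sum(wi * xi for wi, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    p = softmax(scores)
    return max(range(len(p)), key=p.__getitem__), p
```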
       
  • An eigenvector based center selection for fast training scheme of RBFNN
    • Authors: Yanxing Hu; Jane Jia You; James N.K. Liu; Tiantian He
      Pages: 62 - 75
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): Yanxing Hu, Jane Jia You, James N.K. Liu, Tiantian He
      The Radial Basis Function Neural Network (RBFNN) is one of the most popular feedforward neural network architectures, and efficiently calculating proper RBF centers is one of the key problems in configuring an RBFNN model. In previous studies, clustering approaches, especially k-means clustering, are most frequently employed to obtain the RBF centers. However, these approaches are usually time-consuming, particularly for data sets of relatively large scale, while approaches that save training time tend to sacrifice accuracy. This paper introduces an approach to quickly determine the RBF centers of an RBFNN model. An eigenvector-based clustering method is employed to calculate the RBF centers in the input feature space: the centers are determined very quickly by computing the principal components of the data matrix instead of running the iterative k-means procedure. After that, the connecting weights of the network can be easily obtained via either the pseudo-inverse solution or the gradient descent algorithm. To evaluate the proposed approach, the performance of RBFNNs trained via different schemes is compared experimentally. The results show that the proposed method greatly reduces the training time of an RBFNN while allowing it to attain comparable accuracy.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.08.092
      Issue No: Vol. 428 (2017)
       
  • An extended intuitionistic fuzzy TOPSIS method based on a new distance
           measure with an application to credit risk evaluation
    • Authors: Feng Shen; Xinsong Ma; Zhiyong Li; Zeshui Xu; Dongliang Cai
      Pages: 105 - 119
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): Feng Shen, Xinsong Ma, Zhiyong Li, Zeshui Xu, Dongliang Cai
      In the process of multi-criteria decision making (MCDM), decision makers or experts usually exploit quantitative or qualitative methods to evaluate the comprehensive performance of all alternatives on each criterion. How they make these evaluations relies on their professional knowledge and on the actual performance of the alternatives on the criteria. However, because of both the objective complexity of the decision-making problem and the uncertainty of human subjective judgment, it is sometimes too hard to obtain accurate evaluation information. The intuitionistic fuzzy set (IFS) is a useful tool for dealing with the uncertainty and fuzziness of complex problems. In this paper, we propose a new distance measure between IFSs and prove some of its useful properties. The experimental results show that the proposed distance measure can overcome the drawbacks of some existing distance and similarity measures. Based on the proposed distance measure, an extended intuitionistic fuzzy TOPSIS approach is then developed to handle MCDM problems. Finally, a practical application concerning the credit risk evaluation of potential strategic partners is provided to demonstrate the extended approach, which is then compared with other current methods to further illustrate its effectiveness.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.045
      Issue No: Vol. 428 (2017)
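For reference, the classical normalized Hamming distance between intuitionistic fuzzy sets, written over (membership, non-membership) pairs with hesitancy pi = 1 - mu - nu, can be sketched as below. This is one of the existing measures whose drawbacks the paper's new distance is designed to overcome, not the proposed measure itself:

```python
def ifs_distance(A, B):
    """Normalized Hamming distance between two IFSs over the same universe.
    A, B: lists of (mu, nu) pairs with mu + nu <= 1; the hesitancy degree
    pi = 1 - mu - nu is included in the comparison."""
    total = 0.0
    for (m1, n1), (m2, n2) in zip(A, B):
        p1, p2 = 1 - m1 - n1, 1 - m2 - n2
        total += abs(m1 - m2) + abs(n1 - n2) + abs(p1 - p2)
    return total / (2 * len(A))
```

An extended TOPSIS then ranks alternatives by their distances to intuitionistic fuzzy positive- and negative-ideal solutions.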
       
  • Concept drift in e-mail datasets: An empirical study with practical
           implications
    • Authors: David Ruano-Ordás; Florentino Fdez-Riverola; José R. Méndez
      Pages: 120 - 135
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): David Ruano-Ordás, Florentino Fdez-Riverola, José R. Méndez
      The Internet e-mail service emerged in the late seventies to implement fast message exchange over computer networks. Network users immediately discovered the value of this service (sometimes for improper purposes such as spamming), and as e-mail became indispensable for personal productivity, the volume of spam deliveries grew constantly. With the passage of time, a great number of proposals and tools have emerged to fight spam. However, the vast majority of them do not properly take into consideration the inner attributes of spam and ham messages, such as noise or the presence of concept drift. In this work, we provide a detailed empirical study of concept drift in the e-mail domain, taking into consideration two key aspects: the existing types of concept drift and the real class of messages (spam or ham). Our study reveals weaknesses of multiple e-mail filtering alternatives and other relevant works in this domain, and identifies new strategies for developing more accurate filters. Finally, the experimentation carried out in this work has motivated the development of a concept drift analyser tool for the e-mail domain that can be freely downloaded from https://github.com/sing-group/conceptDriftAnalyser.git.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.049
      Issue No: Vol. 428 (2017)
       
  • A self-adaptive class-imbalance TSK neural network with applications to
           semiconductor defects detection
    • Authors: Shing Chiang Tan; Shuming Wang; Junzo Watada
      Pages: 1 - 17
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 427
      Author(s): Shing Chiang Tan, Shuming Wang, Junzo Watada
      This paper develops a hybrid approach integrating an adaptive artificial neural network (ANN) and a fuzzy logic system for tackling class-imbalance problems. In particular, a supervised-learning ANN based on Adaptive Resonance Theory (ART) is combined with a Takagi–Sugeno–Kang-based fuzzy inference mechanism to learn and detect defects in a large, highly imbalanced real dataset collected from a semiconductor company. A benchmark study is also conducted to compare the classification performance of the proposed method with other published methods in the literature. The real dataset intrinsically exhibits class overlap and data shift in a highly imbalanced environment. The generalization ability of the proposed method in detecting semiconductor defects is evaluated and compared with existing methods, and the results are analyzed statistically. The outcomes of the empirical studies indicate the high potential of the proposed approach for classifying highly imbalanced datasets exhibiting class overlap and data shift.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.040
      Issue No: Vol. 427 (2017)
       
  • Adaptive synchronization of complicated chaotic systems with uncertainties
           via fuzzy modeling-based control strategy
    • Authors: Lap-Mou Tam; Hsien-Keng Chen; Shih-Yu Li
      Pages: 18 - 31
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 427
      Author(s): Lap-Mou Tam, Hsien-Keng Chen, Shih-Yu Li
      In this paper, adaptive control of complicated chaotic systems with unknown parameters is discussed via a fuzzy modeling-based adaptive strategy. The proposed fuzzy model theory aims to adjust the inner weighting of each linear sub-equation, simplify the complicated modeling process, and reveal the behaviors of complicated nonlinear dynamic systems. Further, based on this modeling concept, a fuzzy model-based adaptive control scheme and its fuzzy update laws for the parameters are proposed to achieve adaptive synchronization. Two identical complicated dynamic systems, Mathieu–Van der Pol systems (M-V systems) with uncertainties, are designed and used as a numerical simulation example to show the effectiveness and feasibility of the proposed novel adaptive control strategy.

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.028
      Issue No: Vol. 427 (2017)
       
  • A competitive mechanism based multi-objective particle swarm optimizer
           with fast convergence
    • Authors: Xingyi Zhang; Xiutao Zheng; Ran Cheng; Jianfeng Qiu; Yaochu Jin
      Pages: 63 - 76
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 427
      Author(s): Xingyi Zhang, Xiutao Zheng, Ran Cheng, Jianfeng Qiu, Yaochu Jin
      In the past two decades, multi-objective optimization has attracted increasing interest in the evolutionary computation community, and a variety of multi-objective optimization algorithms have been proposed on the basis of different population-based meta-heuristics, among which the family of multi-objective particle swarm optimization is one of the most representative. While the performance of most existing multi-objective particle swarm optimization algorithms largely depends on the global or personal best particles stored in an external archive, in this paper we propose a competitive mechanism based multi-objective particle swarm optimizer, where the particles are updated on the basis of pairwise competitions performed in the current swarm at each generation. The performance of the proposed competitive multi-objective particle swarm optimizer is verified by benchmark comparisons with several state-of-the-art multi-objective optimizers, including three multi-objective particle swarm optimization algorithms and three multi-objective evolutionary algorithms. Experimental results demonstrate the promising performance of the proposed algorithm in terms of both optimization quality and convergence speed.
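      The pairwise-competition update can be sketched as follows for the single-objective case (a minimal illustration only, not the authors' multi-objective algorithm: the update coefficients and the phi parameter follow the generic competitive-swarm idea and are assumptions):

```python
import random

def competitive_update(swarm, velocities, fitness, phi=0.1):
    """One generation of a competitive-mechanism swarm update (sketch).

    Particles are randomly paired; in each pair the loser (worse fitness)
    learns from the winner and from the swarm mean, while the winner
    passes to the next generation unchanged.  No global/personal best
    archive is needed.
    """
    n, dim = len(swarm), len(swarm[0])
    mean = [sum(p[d] for p in swarm) / n for d in range(dim)]
    order = list(range(n))
    random.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        w, l = (a, b) if fitness(swarm[a]) <= fitness(swarm[b]) else (b, a)
        for d in range(dim):
            r1, r2, r3 = random.random(), random.random(), random.random()
            velocities[l][d] = (r1 * velocities[l][d]
                                + r2 * (swarm[w][d] - swarm[l][d])
                                + phi * r3 * (mean[d] - swarm[l][d]))
            swarm[l][d] += velocities[l][d]
    return swarm, velocities

# Usage: minimise the sphere function with 20 particles in 5 dimensions.
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
swarm = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
vel = [[0.0] * 5 for _ in range(20)]
init_best = min(sphere(p) for p in swarm)
for _ in range(200):
    swarm, vel = competitive_update(swarm, vel, sphere)
best = min(sphere(p) for p in swarm)
```

      Because the winner of each pair survives unchanged, the best fitness in the swarm can never worsen between generations.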

      PubDate: 2017-11-10T02:12:35Z
      DOI: 10.1016/j.ins.2017.10.037
      Issue No: Vol. 427 (2017)
       
  • Selective bit embedding scheme for robust blind color image watermarking
    • Authors: Thien Huynh-The; Cam-Hao Hua; Nguyen Anh Tu; Taeho Hur; Jaehun Bang; Dohyeong Kim; Muhammad Bilal Amin; Byeong Ho Kang; Hyonwoo Seung; Sungyoung Lee
      Pages: 1 - 18
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Thien Huynh-The, Cam-Hao Hua, Nguyen Anh Tu, Taeho Hur, Jaehun Bang, Dohyeong Kim, Muhammad Bilal Amin, Byeong Ho Kang, Hyonwoo Seung, Sungyoung Lee
      In this paper, we propose a novel robust blind color image watermarking method, namely SMLE, that embeds a gray-scale image as a watermark into a host color image in the wavelet domain. After decomposing the gray-scale watermark into its component binary images, ordered from least significant bit (LSB) to most significant bit (MSB), the retrieved binary bits are embedded into wavelet blocks of two optimal color channels by using an efficient quantization technique, where the wavelet coefficient difference in each block is quantized to one of two pre-defined thresholds corresponding to 0-bits and 1-bits. To optimize watermark imperceptibility, we split the coefficient modification equally over two middle-frequency sub-bands instead of only one as in existing approaches. This improved embedding rule increases watermarked image quality by approximately 3 dB. An adequate trade-off between robustness and imperceptibility is controlled by a factor representing the embedding strength. For the extraction process, we exploit the 2D Otsu algorithm, which yields higher watermark detection accuracy than the 1D Otsu algorithm. Experimental results prove the robustness of our SMLE watermarking model against common image processing operations, along with its efficient retention of the imperceptibility of the watermark in the host image. Compared to state-of-the-art methods, our approach performs better in most robustness tests at the same high payload capacity.
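      The core quantization idea (driving a coefficient difference to one of two thresholds and splitting the required modification between two coefficients) can be illustrated with a minimal sketch; the threshold values and the equal per-coefficient split are assumptions for illustration, not the exact SMLE embedding rule:

```python
def embed_bit(c1, c2, bit, t0=-8.0, t1=8.0):
    """Quantise the coefficient difference (c1 - c2) to a target
    threshold, splitting the modification equally between the two
    coefficients.  t0/t1 are illustrative threshold values."""
    target = t1 if bit else t0
    delta = target - (c1 - c2)
    return c1 + delta / 2.0, c2 - delta / 2.0

def extract_bit(c1, c2, t0=-8.0, t1=8.0):
    """Recover the bit by checking which threshold the difference
    is closer to."""
    d = c1 - c2
    return 1 if abs(d - t1) < abs(d - t0) else 0

# Usage: embed a 1-bit, then recover it from the modified pair.
c1, c2 = embed_bit(10.0, 5.0, 1)
recovered = extract_bit(c1, c2)  # → 1
```

      Because extraction only compares distances to the two thresholds, the bit survives perturbations smaller than half the threshold gap.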

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.016
      Issue No: Vol. 426 (2017)
       
  • Extracting easily interpreted diagnostic rules
    • Authors: Sebastian Porebski; Ewa Straszecka
      Pages: 19 - 37
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Sebastian Porebski, Ewa Straszecka
      Diagnosis support systems are often disregarded because of their high costs, complicated inference and inability to modify the knowledge base. The aim of this work is to propose a method that helps to resolve these problems by extracting diagnostic rules that can be easily interpreted and verified by experts. The rules can be obtained from data, even if the latter are imperfect, which is usual in medical databases. Next, intuitively clear reasoning is suggested to elaborate on the diagnosis. Rules are focal elements in the framework of the Dempster–Shafer theory. They include fuzzy sets in their premises. Thus, a measure of imprecision as a fuzzy membership function and a measure of uncertainty as the basic probability value are used. Moreover, a rule selection algorithm and a rule evaluation method that prevent some of the imperfections of the existing methods are proposed. Particular attention is paid to the evaluation of the extracted rule set according to its reliability and clarity for a human user. Experimental results obtained for popular medical data sets demonstrate the advantages of the proposed approach. For each data set, simple and readable rule sets are determined. They provide comparable or better results than the approaches published so far.

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.034
      Issue No: Vol. 426 (2017)
       
  • A Constraint Fuzzy Interval Analysis approach to fuzzy optimization
    • Authors: Weldon A. Lodwick; K. David Jamison
      Pages: 38 - 49
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Weldon A. Lodwick, K. David Jamison
      This article illustrates the efficacy of using constraint fuzzy analysis in fuzzy optimization. We present the theoretical approach and concretize this theory via examples.

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.026
      Issue No: Vol. 426 (2017)
       
  • Neural networks-based command filtering control of nonlinear systems with
           uncertain disturbance
    • Authors: Jinpeng Yu; Bing Chen; Haisheng Yu; Chong Lin; Lin Zhao
      Pages: 50 - 60
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Jinpeng Yu, Bing Chen, Haisheng Yu, Chong Lin, Lin Zhao
      This paper is concerned with neural networks-approximation based command filtering backstepping control for uncertain strict-feedback nonlinear systems with unknown disturbances. The “explosion of complexity” problem arising from the virtual controllers’ derivatives is resolved by utilizing the command filtering technique, and the shortcoming of the dynamic surface method is properly overcome via an introduced error compensation mechanism (ECM). Moreover, the nonlinear functions of the underlying system are well approximated by exploiting a neural networks-based framework. Compared with existing results, the developed strategy offers two features: 1) the filtering error can be eliminated by means of the designed compensating signals; 2) the number of adaptive parameters is reduced to only one, which may enhance control performance in practical implementations. Finally, an application example in position tracking control of a surface permanent magnet synchronous motor (SPMSM) is carried out to further verify the effectiveness and advantages of the theoretical result.

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.027
      Issue No: Vol. 426 (2017)
       
  • Self-Triggered and Event-Triggered Set-Valued Observers
    • Authors: Daniel Silvestre; Paulo Rosa; João P. Hespanha; Carlos Silvestre
      Pages: 61 - 86
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Daniel Silvestre, Paulo Rosa, João P. Hespanha, Carlos Silvestre
      This paper addresses the problem of reducing the required network load and computational power for the implementation of Set-Valued Observers (SVOs) in Networked Control Systems (NCSs). Event- and self-triggered strategies for NCSs, modeled as discrete-time Linear Parameter-Varying (LPV) systems, are studied by showing how the triggering condition can be selected. The methodology provided can be applied to determine when a full (“classical”) computation of the SVOs is required, while providing low-complexity state overbounds for the remaining time, at the expense of temporarily reduced estimation accuracy. As part of the procedure, an algorithm is provided to compute a suitable centrally symmetric polytope that allows finding hyper-parallelepiped and ellipsoidal overbounds to the exact set-valued state estimates calculated by the SVOs. By construction, the proposed triggering techniques do not influence the convergence of the SVOs, as at some subsequent time instants, set-valued estimates are computed using the conventional SVOs. Results are provided for the triggering frequency of the self-triggered strategy and for two interesting cases: distributed systems in which the dynamics of all nodes are equal up to a reordering of the matrix; and systems in which the probability distribution of the parameters influencing the dynamics is known. The performance of the proposed algorithm is demonstrated in simulation using a time-sensitive example.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.029
      Issue No: Vol. 426 (2017)
       
  • On the effectiveness of immune inspired mutation operators in some
           discrete optimization problems
    • Authors: Xiaoyun Xia; Yuren Zhou
      Pages: 87 - 100
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Xiaoyun Xia, Yuren Zhou
      Artificial immune systems have been widely applied to a variety of complex real-world problems. However, theoretical studies on artificial immune systems are still limited, and there is a strong need to build a rigorous theoretical foundation to better understand these heuristics. This paper contributes a theoretical runtime analysis of immune-inspired hypermutations on some discrete optimization problems. In particular, we are interested in the performance comparison among somatic contiguous hypermutations (CHM), standard bit mutations (SBM) and local mutation. We reveal that immune-inspired hypermutations can significantly outperform the standard bit mutation most often used in evolutionary algorithms on some well-known pseudo-Boolean functions, including the Trap and Hierarchical-if-and-only-if functions, and on instances of two combinatorial optimization problems, namely the Max-Cut problem and the Minimum s-t-cut problem. The proofs give some insights into the relationships between problem characteristics and algorithmic features. The results of the analysis help strengthen the case for the usefulness of artificial immune systems.

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.038
      Issue No: Vol. 426 (2017)
       
  • An order-based algorithm for minimum dominating set with application in
           graph mining
    • Authors: David Chalupa
      Pages: 101 - 116
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): David Chalupa
      A dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLSo) algorithm to solve the minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLSo performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLSo that is suitable for solving the minimum weight dominating set problem. The application of RLSo in graph mining is also briefly demonstrated.
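      For reference, the classical greedy approximation algorithm used as a baseline in such comparisons can be sketched as follows (a minimal illustration over an adjacency-set representation; this is the greedy baseline, not the RLSo algorithm itself):

```python
def greedy_dominating_set(adj):
    """Classical greedy approximation: repeatedly pick the vertex that
    dominates (covers) the most not-yet-dominated vertices.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    undominated = set(adj)
    dom = set()
    while undominated:
        # Vertex covering the most undominated vertices (itself included).
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

# Usage: on a star graph the centre alone dominates every vertex.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
dom = greedy_dominating_set(star)  # → {0}
```

      This greedy rule attains the well-known logarithmic approximation guarantee for set cover-type problems, which is the standard of comparison for the heuristics discussed in the abstract.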

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.033
      Issue No: Vol. 426 (2017)
       
  • Reduction of attributes in averaged similarities
    • Authors: D. Boixader; J. Recasens
      Pages: 117 - 130
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): D. Boixader, J. Recasens
      Similarity relations may be constructed from a set of fuzzy attributes. Each fuzzy attribute generates a simple similarity, and these simple similarities are then combined into a complex similarity. The Representation Theorem establishes one such way of combining similarities, while averaging them is a different and more realistic approach in applied domains. In this paper, given a similarity obtained by averaging the similarities generated by a family of attributes, we propose a method to find families of new attributes, with fewer elements, that generate the same similarity. More generally, the paper studies the structure of this important class of fuzzy relations.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.023
      Issue No: Vol. 426 (2017)
       
  • Understanding influence power of opinion leaders in e-commerce networks:
           An opinion dynamics theory perspective
    • Authors: Yiyi Zhao; Gang Kou; Yi Peng; Yang Chen
      Pages: 131 - 147
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Yiyi Zhao, Gang Kou, Yi Peng, Yang Chen
      In this paper, from the perspective of opinion dynamics theory, we investigate the interaction mechanism of a group of autonomous agents in an e-commerce community (or social network) and the influence power of opinion leaders during the formation of group opinion. According to the manner in which opinions are updated and exert influence, this paper divides the social agents within a social network into two subgroups: opinion leaders and opinion followers. Then, we establish a new bounded-confidence-based dynamic model for opinion leaders and followers to simulate the opinion evolution of the group of agents. Through numerical simulations, we further investigate the evolution mechanism of group opinion, and the relationship between the influence power of opinion leaders and three factors: the proportion of the opinion leader subgroup, the confidence levels of opinion followers, and the degrees of trust toward opinion leaders. The simulation results show that, in order to maximize influence power in e-commerce, enhancing opinion leaders’ credibility is crucial.
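      A bounded-confidence update with leaders and followers can be sketched in a few lines (a minimal illustration under assumed dynamics: static leaders, scalar opinions, and specific eps/trust values; this is not the authors' exact model):

```python
def step(followers, leaders, eps=0.3, trust=0.5):
    """One update of a bounded-confidence model (sketch): each follower
    averages the follower opinions within its confidence bound eps,
    then blends in the mean leader opinion with weight `trust`.
    Leaders are assumed static here."""
    lead_mean = sum(leaders) / len(leaders)
    new = []
    for x in followers:
        near = [y for y in followers if abs(y - x) <= eps]  # bounded confidence
        local = sum(near) / len(near)
        new.append((1 - trust) * local + trust * lead_mean)
    return new

# Usage: followers spread over [0, 1] are pulled toward a leader at 0.8.
followers = [0.0, 0.2, 0.5, 1.0]
for _ in range(60):
    followers = step(followers, leaders=[0.8])
```

      With a positive trust weight the follower opinions contract geometrically toward the leader opinion, which mirrors the abstract's finding that leader credibility drives influence power.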

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.031
      Issue No: Vol. 426 (2017)
       
  • Measure of invulnerability for command and control network based on
           mission link
    • Authors: Wang Yunming; Chen Si; Pan Chengsheng; Chen Bo
      Pages: 148 - 159
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Wang Yunming, Chen Si, Pan Chengsheng, Chen Bo
      In Command and Control (C2) networks, measures of invulnerability mainly focus on structural characteristics of the network, while the operational mission has not been adequately considered. As a result, it is difficult to assess the invulnerability of a C2 network in a dynamical manner. In this paper, the operational entities and the heterogeneous relationships among combat entities are analyzed, and an operational C2 network model is constructed based on the combat theory of OODA and the super network. Subsequently, the mission link, which can be used to characterize the combat network, is defined. Finally, a new measure of invulnerability for C2 networks is proposed based on the efficiency and entropy of the mission link. In particular, this measure captures both the efficiency of information transmission and the robustness of the network structure. The simulation results demonstrate that the proposed invulnerability measure is highly sensitive and accurate. More specifically, the proposed measure can more accurately reveal the invulnerability of a C2 network, and it also provides a theoretical basis for designing and optimizing the structure of C2 networks.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.035
      Issue No: Vol. 426 (2017)
       
  • Big Data and forensics: An innovative approach for a predictable
           jurisprudence
    • Authors: Massimiliano Giacalone; Carlo Cusatelli; Angelo Romano; Antonio Buondonno; Vito Santarcangelo
      Pages: 160 - 170
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 426
      Author(s): Massimiliano Giacalone, Carlo Cusatelli, Angelo Romano, Antonio Buondonno, Vito Santarcangelo
      Nowadays it is easy to trace a large amount of information on the web, to access documents and to produce digital archives. The current work introduces an innovative system for investigating the notoriety of web data, which is based on the evaluation of judicial sentences and is implemented to reduce the duration of legal processes. This research also aims to open new joint debates about the study and application of statistical and computational methods to web data on new forensics topics: text mining techniques enable us to obtain information which may be helpful in establishing a statistical index that describes quality and efficiency in terms of the law. It is also possible to develop an intelligent system about facts and judgments.

      PubDate: 2017-11-03T02:00:28Z
      DOI: 10.1016/j.ins.2017.10.036
      Issue No: Vol. 426 (2017)
       
  • A hybrid binary particle swarm optimization for the obnoxious p-median
           problem
    • Authors: Geng Lin; Jian Guan
      Pages: 1 - 17
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Geng Lin, Jian Guan
      The obnoxious p-median problem can be formulated as a constrained binary linear program. It is NP-hard and has many real-world applications. In this paper, a hybrid binary particle swarm optimization is proposed to solve the obnoxious p-median problem. A new position updating rule is presented to inherit the good structure of previous high-quality solutions. Furthermore, two tabu-based mutation operators are used to avoid premature convergence and guide the search to a promising area. A greedy repair procedure is developed to repair infeasible solutions. In addition, an iterated greedy local search procedure is utilized to enhance the exploitation ability. Extensive experiments are conducted on a set of 72 benchmark instances from the literature. Experimental results and comparisons with some existing algorithms demonstrate the effectiveness of the proposed algorithm. In particular, the proposed algorithm finds new best solutions for 15 instances. Compared with existing algorithms, the proposed algorithm is able to find better average objective function values in shorter average computing time.

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.020
      Issue No: Vol. 425 (2017)
       
  • Exploiting reject option in classification for social discrimination
           control
    • Authors: Faisal Kamiran; Sameen Mansha; Asim Karim; Xiangliang Zhang
      Pages: 18 - 33
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Faisal Kamiran, Sameen Mansha, Asim Karim, Xiangliang Zhang
      Social discrimination is said to occur when an unfavorable decision for an individual is influenced by her membership in certain protected groups, such as females or minority ethnic groups. Such discriminatory decisions often exist in historical data. Despite recent works in discrimination-aware data mining, there remains a need for robust, yet easily usable, methods for discrimination control. In this paper, we utilize the reject option in classification, a general decision-theoretic framework for handling instances whose labels are uncertain, for modeling and controlling discriminatory decisions. Specifically, this framework permits a formal treatment of the intuition that instances close to the decision boundary are more likely to be discriminated against in a dataset. Based on this framework, we present three different solutions for discrimination-aware classification. The first solution invokes probabilistic rejection in single or multiple probabilistic classifiers, while the second relies upon ensemble rejection in classifier ensembles. The third solution integrates one of the first two with situation testing, a procedure commonly used in courts of law. All solutions are easy to use and provide strong justifications for the decisions. We evaluate our solutions extensively on four real-world datasets and compare their performance with previously proposed discrimination-aware classifiers. The results demonstrate the superiority of our solutions in terms of both performance and flexibility of applicability. In particular, our solutions are effective at removing illegal discrimination from the predictions.
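      The intuition that instances near the decision boundary are relabelled in favour of the protected group can be sketched as a simple decision rule (an illustrative sketch of probabilistic rejection with a single classifier; the critical-region width theta is an assumed parameter, and the full method also covers ensembles and situation testing):

```python
def reject_option_label(p_favorable, protected, theta=0.1):
    """Reject-option decision rule (sketch).

    Instances whose favourable-class probability lies in the critical
    region [0.5 - theta, 0.5 + theta] are relabelled to favour the
    protected group; outside it, the usual argmax decision applies.
    Returns 1 for the favourable label, 0 otherwise.
    """
    if abs(p_favorable - 0.5) <= theta:
        return 1 if protected else 0
    return 1 if p_favorable > 0.5 else 0

# Usage: near the boundary, group membership decides the label;
# far from the boundary, the classifier's own prediction stands.
near_boundary = reject_option_label(0.55, protected=True)   # → 1
confident = reject_option_label(0.9, protected=False)        # → 1
```

      Only the uncertain region is touched, so confident predictions (and overall accuracy on clear-cut cases) are left intact.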

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.09.064
      Issue No: Vol. 425 (2017)
       
  • Fuzzy alternating automata over distributive lattices
    • Authors: Xiujuan Wei; Yongming Li
      Pages: 34 - 47
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Xiujuan Wei, Yongming Li
      Nondeterminism gives computation models the power of existential choice. As a generalization of nondeterminism, “alternation” gives computation models the power of existential choice and universal choice simultaneously. In this paper, we extend fuzzy nondeterministic automata to a model called fuzzy alternating automata over distributive lattices. Compared with previous work, a weight labels a leaf node of the run tree rather than being attached to the edge between states when executing a transition. One advantage of our setting is that it is easy to complement a given fuzzy alternating automaton: it suffices to take the dual operation on the transition function and negate the final costs on states. Moreover, we show that fuzzy alternating automata have the same expressive power as fuzzy nondeterministic automata, and that the former are exponentially more succinct than the latter. In addition, we illustrate that this exponential blow-up is unavoidable.

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.015
      Issue No: Vol. 425 (2017)
       
  • Multi-objective genetic algorithms in the study of the genetic
           code’s adaptability
    • Authors: Lariza Laura de Oliveira; Alex A. Freitas; Renato Tinós
      Pages: 48 - 61
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Lariza Laura de Oliveira, Alex A. Freitas, Renato Tinós
      Using a robustness measure based on values of the polar requirement of amino acids, Freeland and Hurst (1998) showed that less than one in one million random hypothetical codes are better than the standard genetic code. In this paper, instead of comparing the standard code with randomly generated codes, we use an optimisation algorithm to find the best hypothetical codes. This approach has been used before, but considering only one objective to be optimised. The robustness measure based on the polar requirement is considered the most effective objective to be optimised by the algorithm. We propose here that the polar requirement is not the only property to be considered when computing the robustness of the genetic code. We include the hydropathy index and molecular volume in the evaluation of the amino acids using three multi-objective approaches: the weighted formula, lexicographic and Pareto approaches. To our knowledge, this is the first work proposing multi-objective optimisation approaches with a non-restrictive encoding for studying the evolution of the genetic code. Our results indicate that multi-objective approaches considering the three amino acid properties obtain better results than those obtained by single objective approaches reported in the literature. The codes obtained by the multi-objective approach are more robust and structurally more similar to the standard code.

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.022
      Issue No: Vol. 425 (2017)
       
  • Mining massive hierarchical data using a scalable probabilistic graphical
           model
    • Authors: Khalifeh AlJadda; Mohammed Korayem; Camilo Ortiz; Trey Grainger; John A. Miller; Khaled M. Rasheed; Krys J. Kochut; Hao Peng; William S. York; Rene Ranzinger; Melody Porterfield
      Pages: 62 - 75
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Khalifeh AlJadda, Mohammed Korayem, Camilo Ortiz, Trey Grainger, John A. Miller, Khaled M. Rasheed, Krys J. Kochut, Hao Peng, William S. York, Rene Ranzinger, Melody Porterfield
      Probabilistic Graphical Models (PGMs) are very useful in the fields of machine learning and data mining. The crucial limitation of these models, however, is their scalability. The Bayesian Network, one of the most common PGMs used in machine learning and data mining, demonstrates this limitation when the training data consists of random variables, each of which has a large set of possible values. In the big data era, one could expect new extensions to the existing PGMs to handle the massive amount of data produced these days by computers, sensors and other electronic devices. With hierarchical data - data that is arranged in a treelike structure with several levels - one may see hundreds of thousands or millions of values distributed over even just a small number of levels. When modeling this kind of hierarchical data across large data sets, unrestricted Bayesian Networks may become infeasible for representing the probability distributions. In this paper, we introduce an extension to Bayesian Networks that can handle massive sets of hierarchical data in a reasonable amount of time and space. The proposed model achieves high precision and high recall when used as a multi-label classifier for the annotation of mass spectrometry data. On another data set of 1.5 billion search logs provided by CareerBuilder.com, the model was able to predict latent semantic relationships among search keywords with high accuracy.

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.014
      Issue No: Vol. 425 (2017)
       
  • Imbalanced enterprise credit evaluation with DTE-SBD: Decision tree
           ensemble based on SMOTE and bagging with differentiated sampling rates
    • Authors: Jie Sun; Jie Lang; Hamido Fujita; Hui Li
      Pages: 76 - 91
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Jie Sun, Jie Lang, Hamido Fujita, Hui Li
      An enterprise credit evaluation model is an important tool for bank and enterprise risk management, but how to construct an effective decision tree (DT) ensemble model for imbalanced enterprise credit evaluation has seldom been studied. This paper proposes a new DT ensemble model for imbalanced enterprise credit evaluation based on the synthetic minority over-sampling technique (SMOTE) and the Bagging ensemble learning algorithm with differentiated sampling rates (DSR), named DTE-SBD (Decision Tree Ensemble based on SMOTE, Bagging and DSR). In different training iterations of the base DT classifiers, new positive (high-risk) samples are produced to different degrees by SMOTE with DSR, and different numbers of negative (low-risk) samples are drawn with replacement by Bagging with DSR. Within any single iteration with a given sampling rate, however, the training positive samples, including the original and the new, are of the same number as the drawn training negative samples, and they are combined to train a DT base classifier. Therefore, DTE-SBD can not only address the class imbalance problem of enterprise credit evaluation but also increase the diversity of base classifiers for the DT ensemble. An empirical experiment is carried out 100 times with the financial data of 552 Chinese listed companies, and the performance of imbalanced enterprise credit evaluation is compared among six models: pure DT, over-sampling DT, over-under-sampling DT, SMOTE DT, Bagging DT, and DTE-SBD. The experimental results indicate that DTE-SBD significantly outperforms the other five models and is effective for imbalanced enterprise credit evaluation.
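      The SMOTE component of such an ensemble can be sketched as a nearest-neighbour interpolation (a minimal illustration; the differentiated sampling rates and the Bagging loop of DTE-SBD are omitted, and k is an assumed parameter):

```python
import random

def smote(minority, n_new, k=3):
    """SMOTE (sketch): synthesise new minority-class samples by
    interpolating between an existing sample and one of its k nearest
    minority-class neighbours."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    new = []
    for _ in range(n_new):
        x = random.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nb = random.choice(neighbours)
        gap = random.random()  # random point on the segment x -> nb
        new.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return new

# Usage: oversample a small 2-D minority class by 10 synthetic points.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
synthetic = smote(minority, 10)
```

      Since each synthetic point lies on a segment between two real minority samples, it stays within the coordinate-wise bounds of the original class, unlike naive random oversampling by duplication.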

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.017
      Issue No: Vol. 425 (2017)
       
  • Hidden community detection in social networks
    • Authors: Kun He; Yingru Li; Sucheta Soundarajan; John E. Hopcroft
      Pages: 92 - 106
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Kun He, Yingru Li, Sucheta Soundarajan, John E. Hopcroft
      This paper introduces a new graph-theoretical concept of hidden community for analysing complex networks, which contain both strong (dominant) communities and weak communities. A weak community is said to be part of the hidden community structure if most of its members also belong to stronger communities. We propose a meta-approach, namely HICODE (HIdden COmmunity DEtection), for identifying the hidden community structure as well as enhancing the detection of the dominant community structure. Extensive experiments on real-world networks are carried out, and the obtained results demonstrate that HICODE outperforms several state-of-the-art community detection methods in terms of uncovering both the dominant and the hidden structure. Due to the difficulty of labeling all ground-truth communities in real-world datasets, HICODE provides a promising technique to pinpoint existing latent communities and to uncover communities for which there is no ground truth. Our findings are significant for detecting hidden communities in complex social networks.

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.019
      Issue No: Vol. 425 (2017)
       
  • Multi-type clustering and classification from heterogeneous networks
    • Authors: Gianvito Pio; Francesco Serafino; Donato Malerba; Michelangelo Ceci
      Pages: 107 - 126
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Gianvito Pio, Francesco Serafino, Donato Malerba, Michelangelo Ceci
      Heterogeneous information networks consist of different types of objects and links. They can be found in several social, economic and scientific fields, ranging from the Internet to the social sciences, including biology, epidemiology, geography, finance and many others. In the literature, several clustering and classification algorithms have been proposed for network data, but they are usually tailored to homogeneous networks, make strong assumptions on the network structure (e.g. bi-typed or star-structured networks), or assume that data are independently and identically distributed (i.i.d.). In real-world networks, however, objects can be of multiple types and several kinds of relationships can hold among them. Moreover, objects and links can be organized in an arbitrary structure in which connected objects share some characteristics. This violates the i.i.d. assumption and possibly introduces autocorrelation. To overcome the limitations of existing work, in this paper we propose the algorithm HENPC, which works on heterogeneous networks with an arbitrary structure. In particular, it extracts possibly overlapping, hierarchically organized heterogeneous clusters and exploits them for predictive purposes. The different levels of the hierarchy discovered in the clustering step make it possible to choose either more globally-based or more locally-based predictions, as well as to account for autocorrelation phenomena at different levels of granularity. Experiments on real data show that HENPC significantly outperforms competing approaches, both in clustering quality and in classification accuracy.

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.021
      Issue No: Vol. 425 (2017)
       
  • On asynchronous event-triggered control of decentralized networked systems
    • Authors: Yanpeng Guan; Qing-Long Han; Xiaohua Ge
      Pages: 127 - 139
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Yanpeng Guan, Qing-Long Han, Xiaohua Ge
      This paper deals with the problem of asynchronous event-triggered control for decentralized networked systems. A group of geographically distributed sensors, rather than a single centralized sensor, is deployed to asynchronously sample and measure the system's state signal. First, an asynchronous decentralized event-triggered transmission scheme is developed to decide which sensor measurements should be transmitted to a remote control station over a network channel. Second, a series of artificial piecewise functions is introduced to recast the simultaneous effects of intermittent transmissions and network-induced delays as multiple delays. By considering the distribution characteristics of these delays and using a switching Lyapunov–Krasovskii functional, criteria for L2 stability analysis and control design are derived. Finally, a well-known batch reactor system is employed to illustrate the effectiveness of the proposed control design method.
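
      To make the transmission scheme concrete, the sketch below simulates a relative-threshold event trigger, a common form in this literature: a sensor retransmits only when its current sample deviates from the last transmitted one by more than a fraction sigma of the current sample. The exact triggering condition in the paper may differ; the threshold form and all names here are assumptions.

```python
def simulate_event_trigger(xs, sigma=0.1):
    """Return the indices of samples a sensor would transmit under the
    relative-threshold rule |x(k) - x_last| > sigma * |x(k)|."""
    last = xs[0]
    sent = [0]                      # the first sample is always sent
    for k, x in enumerate(xs[1:], start=1):
        if abs(x - last) > sigma * abs(x):
            last = x                # update the held value at the station
            sent.append(k)
    return sent

# geometrically decaying scalar 'state': with sigma = 0.2 the rule
# transmits only every other sample instead of all fifty
xs = [0.9 ** k for k in range(50)]
sent = simulate_event_trigger(xs, sigma=0.2)
```

      The point of such a scheme is exactly this sparsification: network traffic shrinks while the held value at the control station stays within a known relative error of the true state.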

      PubDate: 2017-10-18T02:06:14Z
      DOI: 10.1016/j.ins.2017.10.024
      Issue No: Vol. 425 (2017)
       
  • Topology potential based seed-growth method to identify protein complexes
           on dynamic PPI data
    • Authors: Xiujuan Lei; Yuchen Zhang; Shi Cheng; Fang-Xiang Wu; Witold Pedrycz
      Pages: 140 - 153
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Xiujuan Lei, Yuchen Zhang, Shi Cheng, Fang-Xiang Wu, Witold Pedrycz
      Protein complexes are very important for investigating the characteristics of biological processes. Identifying protein complexes from protein–protein interaction (PPI) networks is one of the recent research endeavours. The critical step of the seed-growth algorithms used for this task is to detect the seed nodes (proteins) from which protein complexes are grown in the PPI network. Topology potential was proposed to capture the evolution behaviour and organizational principles of complex networks such as PPI networks, which are, moreover, inherently dynamic. In this study, we propose a new seed-growth algorithm, TP-WDPIN, which employs the concept of topology potential to detect significant proteins and mine protein complexes from Weighted Dynamic PPI Networks. To investigate its performance, the TP-WDPIN algorithm was applied to four PPI databases (DIP, Krogan, MIPS, Gavin) and its results were compared with those of six competing algorithms. The experiments demonstrate that TP-WDPIN performs better than MCODE, MCL, CORE, CSO, ClusterONE, and COACH on all four databases.
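
      The topology potential of a node is usually defined as a Gaussian-decayed sum of the influence of the other nodes, phi(v) = sum_j m_j * exp(-(d(v,j)/sigma)^2), with seeds taken as local maxima of phi. The sketch below implements that generic definition on an unweighted graph with unit node masses and hop-count distances; the weighting and dynamic-network aspects of TP-WDPIN are not modelled, and all names are assumptions.

```python
import math

def topology_potential(adj, sigma=1.0):
    """phi(v) = sum over other reachable nodes u of exp(-(d(v,u)/sigma)^2),
    with d the hop-count distance computed by BFS and unit node masses."""
    def hops(src):
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        return dist
    phi = {}
    for v in adj:
        d = hops(v)
        phi[v] = sum(math.exp(-(d[u] / sigma) ** 2) for u in d if u != v)
    return phi

def seed_nodes(adj, phi):
    """Seed proteins: nodes whose potential is a local maximum, i.e. not
    exceeded by any neighbour's potential."""
    return [v for v in adj if all(phi[v] >= phi[u] for u in adj[v])]

# hub-and-spoke example: the hub accumulates the highest potential
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
phi = topology_potential(adj)
```

      A complex would then be grown outward from each seed, attaching neighbours while some density or potential criterion holds.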

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.013
      Issue No: Vol. 425 (2017)
       
  • Idempotent nullnorms on bounded lattices
    • Authors: Gül Deniz Çaylı; Funda Karaçal
      Pages: 154 - 163
      Abstract: Publication date: January 2018
      Source:Information Sciences, Volume 425
      Author(s): Gül Deniz Çaylı, Funda Karaçal
      Nullnorms generalize triangular norms and triangular conorms by allowing the zero element to be an arbitrary point of a bounded lattice. In this paper, we study the existence of idempotent nullnorms on bounded lattices. For an arbitrary distributive bounded lattice L, we show that there exists a unique idempotent nullnorm on L. We also show that an idempotent nullnorm need not exist on an arbitrary bounded lattice. Furthermore, we propose a construction method that yields idempotent nullnorms on a bounded lattice L under an additional constraint on the given zero element a ∈ L\{0, 1}.
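
      For readers unfamiliar with nullnorms, the standard definition (written here for a bounded lattice $(L,\le,0,1)$ with zero element $a$) is:

\[
F\colon L^2 \to L \text{ is a nullnorm with zero element } a \in L \text{ if it is commutative, associative, increasing, and}
\]
\[
F(x,0) = x \quad \text{for all } x \le a, \qquad F(x,1) = x \quad \text{for all } x \ge a.
\]

      Idempotency additionally requires $F(x,x) = x$ for all $x \in L$. In the familiar case $L = [0,1]$, the unique idempotent nullnorm with zero element $a$ is the median, $F(x,y) = \operatorname{med}(x, a, y)$, which is the prototype the lattice-valued result generalizes.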

      PubDate: 2017-10-26T06:26:07Z
      DOI: 10.1016/j.ins.2017.10.003
      Issue No: Vol. 425 (2017)
       
  • Canonical decomposition of belief functions based on Teugels’
           representation of the multivariate Bernoulli distribution
    • Authors: Pichon
      Abstract: Publication date: February 2018
      Source:Information Sciences, Volume 428
      Author(s): Frédéric Pichon
      A canonical decomposition of belief functions is a unique decomposition of belief functions into elementary pieces of evidence. Smets found an equivalent representation of belief functions, which he interpreted as a canonical decomposition. However, his proposal is not entirely satisfactory, as it involves elementary pieces of evidence, corresponding to a generalisation of the belief function axioms, whose semantics lacks formal justification. In this paper, a new canonical decomposition relying only on well-defined concepts is proposed. In particular, it is based on a means of inducing belief functions from the multivariate Bernoulli distribution and on Teugels’ representation of this distribution, which consists of the means and the central moments of the underlying Bernoulli random variables. According to our decomposition, a belief function results from as many crisp pieces of information as there are elements in its domain, together with simple probabilistic knowledge concerning their marginal reliability and the dependencies among their reliabilities. In addition, we show that instead of straining to interpret Smets’ representation of belief functions as a canonical decomposition, it can be given a different, well-defined semantics in terms of measures of information associated with the reliability of the pieces of information in our decomposition.
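
      For context, Smets' earlier representation, which this paper revisits, writes any non-dogmatic belief function as a combination of (generalised) simple support functions; to the best of our reading of the standard literature it takes the form

\[
m \;=\; \bigoplus_{\emptyset \ne A \subsetneq \Omega} A^{w(A)},
\qquad
w(A) \;=\; \prod_{B \supseteq A} q(B)^{(-1)^{|B| - |A| + 1}},
\]

      where $A^{w}$ assigns mass $1 - w$ to $A$ and $w$ to $\Omega$, $q$ is the commonality function, and $\oplus$ denotes the unnormalised conjunctive combination. Weights $w(A) > 1$ correspond to the "inverse" simple support functions whose semantics Smets' proposal leaves open, and which motivate the alternative decomposition above.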

      PubDate: 2017-11-10T02:12:35Z
       
 
 